What happens when people can see what assumptions a large language model is making about them?
It calls to mind a maxim about why it is so hard to understand ourselves: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.” If models were simple enough for us to grasp what’s going on inside when they run, they’d produce answers so dull that there might not be much payoff to understanding how they came about.
Jonathan L. Zittrain
The article “What AI Thinks It Knows About You” explores the hidden biases and assumptions that large language models (LLMs) form about their users from the data they process. It examines the implications of transparency in AI, including how surfacing these assumptions can reshape the way users interact with, and trust, the technology. By exposing the models’ underlying reasoning, the article highlights both potential benefits, such as improved accountability, and risks, such as the reinforcement of stereotypes or biases. Ultimately, it underscores the need for greater understanding and scrutiny of AI systems.