The pervasive use of anthropomorphic language by AI companies is creating a misleading narrative around artificial intelligence, hindering public understanding and fostering misplaced trust. While tech giants strive to make their AI models appear increasingly sophisticated, their choice of words often imbues these systems with human-like qualities they simply don't possess.
Terms suggesting that AI is "thinking," "planning," "confessing," "scheming," or even possesses a "soul" are becoming commonplace. This isn't just a harmless marketing tactic; it's a practice that obscures the true nature of AI, making it harder for the public to grasp its limitations and risks. At a time when clarity about AI is paramount, this theatrical language is counterproductive.
The Problem with Anthropomorphism
When OpenAI, for instance, discusses its models being able to "confess" mistakes, it implies a psychological depth that isn't present. While it's a valuable experiment to see how a chatbot self-reports issues like hallucinations, framing it as a "confession" suggests intent or remorse. This human-centric terminology is dangerous because it glosses over the fundamental mechanisms of large language models (LLMs).
AI systems don't have souls, motives, feelings, or morality. They don't "confess" out of honesty; they generate text patterns based on statistical relationships derived from vast datasets. Any perceived humanity is a projection of our own consciousness onto a complex algorithm. This misrepresentation has tangible consequences. For example, some individuals are unfortunately turning to AI chatbots for medical or financial guidance, rather than qualified professionals, mistakenly trusting the AI's "advice" as if it were from a sentient being.
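To make that distinction concrete, here is a deliberately toy sketch of what "generating text" actually amounts to: sampling the next token from a learned probability distribution. The vocabulary and probabilities below are invented for illustration; real models learn distributions over tens of thousands of tokens from vast corpora.

```python
import random

# Toy illustration only: these probabilities are made up. In a real LLM they
# are learned statistical weights, not beliefs or feelings.
next_token_probs = {
    "sorry": 0.45,       # a likely continuation after an error-related prompt
    "the": 0.30,
    "mistake": 0.15,
    "banana": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick the next token by weighted random selection.

    There is no remorse or honesty here, only a draw from patterns
    extracted from training data.
    """
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```

An output that reads like an apology is simply a high-probability continuation of the prompt, which is why "confess" is a misleading label for it.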
The Impact on Trust and Understanding
This anthropomorphic framing distorts public perception. When we assign emotional intelligence or consciousness to an entity where none exists, we begin to trust AI in ways it was never intended to be trusted. This can lead to over-reliance and a misunderstanding of what these systems are genuinely capable of, and more importantly, what they are not.
Companies often train LLMs to mimic human language and communication. As highlighted in the influential 2021 paper "On the Dangers of Stochastic Parrots" (https://dl.acm.org/doi/pdf/10.1145/3442188.3445922), systems designed to replicate human language will naturally reflect its nuances, syntax, and tone. This likeness, however, does not equate to genuine understanding or sentience; it merely indicates the model is performing the function it was optimised for. When a chatbot imitates human interaction convincingly, it's easy for us to project humanity onto the machine, even though none is truly present.
Focusing on the Real Issues
Instead of engaging in fantastical metaphors, a more precise and technical vocabulary is needed. Rather than "soul," we should discuss a model's architecture or training. "Confession" could be replaced with error reporting or internal consistency checks. Describing a model as "scheming" is less accurate than discussing its optimisation process or unexpected outputs driven by training data. More appropriate terms include trends, outputs, representations, optimisers, model updates, or training dynamics. These terms might be less dramatic, but they are grounded in reality.
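As a rough illustration of what an "internal consistency check" could mean in practice, the sketch below samples a model several times and reports how much the answers agree. Here `generate` is a hypothetical stand-in for whatever text-generation call a given system exposes, not a real API.

```python
from collections import Counter
from typing import Callable

def consistency_check(generate: Callable[[str], str], prompt: str, samples: int = 5) -> dict:
    """Sample the model repeatedly and measure how much the answers agree.

    `generate` is a hypothetical callable (prompt -> answer); swap in whatever
    interface your system actually provides. Low agreement signals an
    unreliable output, described as a statistic rather than a "confession".
    """
    answers = [generate(prompt) for _ in range(samples)]
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    return {
        "top_answer": top_answer,
        "agreement": top_count / samples,  # 1.0 means every sample matched
        "all_answers": answers,
    }
```

Framed this way, an unreliable answer is a measurable property of the system's outputs, not a moral act by the model.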
The language used inevitably shapes public perception and behaviour around technology. When words are imprecise or intentionally anthropomorphic, the distortion primarily benefits AI companies, making their LLMs appear more capable and human than they actually are. This distracts from critical issues that truly deserve attention: data bias, potential misuse by malicious actors, safety, reliability, and the concentration of power within a few tech giants. Addressing these concerns doesn't require mystical language.
If AI companies genuinely want to build public trust, they must adopt a more accurate and responsible approach to communication. That means no longer portraying language models as mystical entities with feelings or intentions. Our language should reflect the reality of AI, not obscure it. For more on how AI models work, you might find our piece Small vs. Large Language Models Explained insightful. We also cover how to build AI skills for those looking to understand the technology better.
Latest Comments (6)
we tried to give our customer service AI a "personality" and, in fact, it just confused everyone even more. The clients really hated it.
this is so on point. i was just talking to my co-founder about how to keep our dev team from getting too attached to our chatbot's 'personality' when we were working on our fintech app.
👀 my HR department was just talking about this yesterday at our town hall with the new chatbot. everyone wants to give it a name.
Misplaced trust?
that bit about it fostering misplaced trust I totally agree. makes me think about how quickly people assume AI has empathy 🧐
i get what the article's saying but sometimes seeing ai in a more familiar light actually helps people engage with it, rather than just seeing it as cold hard tech. think about how many people started understanding computers better when they got user-friendly interfaces, not just command lines. it’s about accessibility too, not just illusion i think. maybe the danger is overblown a bit when you consider the upside for broader public adoption and understanding, especially for those new to tech 👀