
AI in ASIA

The danger of anthropomorphising AI

Giving AI human traits feels harmless, but it's shaping a dangerous illusion. Discover why this common practice distorts our understanding and fosters misplaced trust.

Anonymous · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

AI companies' use of human-like language misleads the public about AI capabilities and limitations.

Terms like AI "confessing" or "thinking" incorrectly attribute human qualities to complex algorithms.

This anthropomorphic framing distorts public perception and can lead to misplaced trust in AI for critical advice.

Who should pay attention: AI developers | Ethicists | Policymakers | The public

What changes next: Debate is likely to intensify regarding responsible AI communication practices.

The pervasive use of anthropomorphic language by AI companies is creating a misleading narrative around artificial intelligence, hindering public understanding and fostering misplaced trust. While tech giants strive to make their AI models appear increasingly sophisticated, their choice of words often imbues these systems with human-like qualities they simply don't possess.

Terms like AI "thinking," "planning," and even possessing a "soul," "confessing," or "scheming" are becoming commonplace. This isn't just a harmless marketing tactic; it's a practice that obscures the true nature of AI, making it harder for the public to grasp its limitations and risks. At a time when clarity about AI is paramount, this theatrical language is counterproductive.

The Problem with Anthropomorphism

When OpenAI, for instance, discusses its models being able to "confess" mistakes, it implies a psychological depth that isn't present. While it's a valuable experiment to see how a chatbot self-reports issues like hallucinations, framing it as a "confession" suggests intent or remorse. This human-centric terminology is dangerous because it glosses over the fundamental mechanisms of large language models (LLMs).

AI systems don't have souls, motives, feelings, or morality. They don't "confess" out of honesty; they generate text patterns based on statistical relationships derived from vast datasets. Any perceived humanity is a projection of our own consciousness onto a complex algorithm. This misrepresentation has tangible consequences. For example, some individuals are unfortunately turning to AI chatbots for medical or financial guidance, rather than qualified professionals, mistakenly trusting the AI's "advice" as if it were from a sentient being.
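The point that "any perceived humanity is a projection" can be made concrete with a deliberately tiny sketch. The toy below is nothing like a production LLM (which uses neural networks over billions of parameters, not bigram counts), but it illustrates the same principle at miniature scale: text that sounds apologetic can be produced purely from word-frequency statistics, with no remorse anywhere in the system. The corpus and function names here are illustrative inventions.

```python
import random
from collections import defaultdict

# A miniature corpus containing apologetic-sounding phrases.
corpus = (
    "i made a mistake . i am sorry . "
    "i made an error . i am uncertain ."
).split()

# Count which word follows which: the entire "model" is these statistics.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Produce text by repeatedly sampling a statistically likely next word.

    No intent, no feeling: just draws from observed word-pair frequencies.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("i"))
```

Whatever sentence this prints, describing it as the program "apologising" or "confessing" would plainly be our projection; the same logic, at vastly greater scale, is why a chatbot's "confession" is pattern generation rather than honesty.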

The Impact on Trust and Understanding

This anthropomorphic framing distorts public perception. When we assign emotional intelligence or consciousness to an entity where none exists, we begin to trust AI in ways it was never intended to be trusted. This can lead to over-reliance and a misunderstanding of what these systems are genuinely capable of, and more importantly, what they are not.

Companies often train LLMs to mimic human language and communication. As highlighted in the influential 2021 paper "On the Dangers of Stochastic Parrots" (https://dl.acm.org/doi/pdf/10.1145/3442188.3445922), systems designed to replicate human language will naturally reflect its nuances, syntax, and tone. This likeness, however, does not equate to genuine understanding or sentience; it merely indicates the model is performing its optimised function. When a chatbot imitates human interaction convincingly, it's easy for us to project humanity onto the machine, even though it's not truly present.

Focusing on the Real Issues

Instead of engaging in fantastical metaphors, a more precise and technical vocabulary is needed. Rather than "soul," we should discuss a model's architecture or training. "Confession" could be replaced with error reporting or internal consistency checks. Describing a model as "scheming" is less accurate than discussing its optimisation process or unexpected outputs driven by training data. More appropriate terms include trends, outputs, representations, optimisers, model updates, or training dynamics. These terms might be less dramatic, but they are grounded in reality.
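The vocabulary shift suggested above can be made concrete. Here is a hypothetical sketch of what "internal consistency checks" might look like in practice: sample a model several times and report, in plain technical terms, how often its answers agree. Everything here is an illustrative assumption — `query_model` is a stand-in for any model call, and the 0.8 agreement threshold is arbitrary, not a standard.

```python
from collections import Counter

def consistency_report(query_model, prompt, samples=5):
    """Measure output stability in neutral terms: no 'confession',
    just how often repeated answers to the same prompt agree."""
    answers = [query_model(prompt) for _ in range(samples)]
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    agreement = top_count / samples
    return {
        "top_answer": top_answer,
        "agreement": agreement,          # 1.0 = fully consistent
        "flagged": agreement < 0.8,      # threshold is an arbitrary example
    }

# Usage with a stand-in "model" that occasionally contradicts itself:
import itertools
fake = itertools.cycle(["Paris", "Paris", "Lyon", "Paris", "Paris"]).__next__
report = consistency_report(lambda p: fake(), "Capital of France?")
print(report)  # agreement 0.8, not flagged
```

Note how naturally the neutral framing reads: the function reports an agreement rate and flags instability, and at no point does the description need words like "confess" or "scheme".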

The language used inevitably shapes public perception and behaviour around technology. When words are imprecise or intentionally anthropomorphic, the distortion primarily benefits AI companies, making their LLMs appear more capable and human than they actually are. This distracts from critical issues that truly deserve attention: data bias, potential misuse by malicious actors, safety, reliability, and the concentration of power within a few tech giants. Addressing these concerns doesn't require mystical language.

If AI companies genuinely want to build public trust, they must adopt a more accurate and responsible approach to communication. This means ceasing to portray language models as mystical entities with feelings or intentions. Our language should reflect the reality of AI, not obscure it. For more on the specifics of how AI models work, you might find our explanation of Small vs. Large Language Models Explained insightful. We also cover how to build AI skills for those looking to understand the technology better.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

Latest Comments (5)

Maria Reyes (@mariar) · 19 January 2026

it's so true how using words like AI "confessing" can really blur what these models are actually doing. for us in the Philippines, especially with financial literacy, how can we educate people on what AI is truly capable of without falling into that anthropomorphism trap, but still show its potential for good?

Dr. Farah Ali (@drfahira) · 8 January 2026

In our work in developing countries, this illusion of sentience created by anthropomorphism directly impacts how communities adopt AI, sometimes leading to over-reliance or alienation from the technology itself.

Lee Chong Wei (@lcw_tech) · 3 January 2026

The "confession" example from OpenAI is interesting, but from an Infra guy's side, it's just another endpoint return. Whether it feels like a confession to a user doesn't change the API call or the compute cycles. The real danger is if people start expecting human-like interaction and the underlying tech can't scale to meet that emotional demand reliably. That's where the costs blow up.

Tran Linh (@tranl) · 2 January 2026

Totally agree with the point about AI "confessing" mistakes. For our Vietnamese NLP models, we see it as pattern recognition in error states, not some kind of self-awareness. It's tough enough getting these models accurate for non-English languages without adding misleading human-like concepts into the mix.

Maggie Chan (@maggiec) · 23 December 2025

Totally agree about the "confess" thing. When OpenAI puts out those statements it's so cringe for us trying to build responsible AI. Clients already hesitate trusting an AI for compliance stuff, and this kind of language just makes them more skeptical, thinking it's some magic box rather than a tool.
