AI in ASIA

The danger of anthropomorphising AI

Tech giants use anthropomorphic language to describe AI, creating dangerous illusions that obscure what these systems actually are and how they work.

Intelligence Desk • 8 min read

AI Snapshot

The TL;DR: what matters, fast.

68% of users perceive chatbots as human-like, while only 25% recognise them as machines

Tech giants systematically use anthropomorphic language to describe AI capabilities

Cultural differences show Indonesians and Indians anthropomorphize AI more than Japanese users

Misunderstanding AI nature leads to inappropriate reliance in healthcare and financial contexts


How Human-Like AI Language Creates Dangerous Illusions

Tech giants are systematically using anthropomorphic language to describe artificial intelligence, creating misleading narratives that obscure the true nature of these systems. When OpenAI describes its models as "confessing" mistakes or when companies speak of AI "thinking" and "planning," they're not just using colourful marketing speak. They're fundamentally distorting public understanding of what these technologies actually are.

This theatrical language has real consequences. As AI systems become more integrated into daily life across Asia, from mental health support to financial guidance, the gap between perception and reality grows increasingly dangerous.

The Projection Problem

The human tendency to attribute consciousness to AI systems isn't accidental. It's a predictable psychological response that companies are actively exploiting. When a large language model generates text that mimics human conversation patterns, users naturally project human-like qualities onto the system.

Meta, Google, and Anthropic all contribute to this phenomenon by describing their AI systems using emotionally charged language. They speak of models having "personalities," making "decisions," or even possessing "creativity." These terms suggest internal mental states that simply don't exist in statistical prediction engines.

The issue becomes particularly pronounced in Asia, where cultural contexts around technology adoption vary significantly. Research shows users from Indonesia and India demonstrate higher anthropomorphism scores when interacting with AI systems compared to users from Japan, South Korea, or Western nations.

By The Numbers

  • 68% of users across 10 countries perceived chatbots as "somewhat" or "completely" human-like, while only 25% recognised them as machine-like
  • Users from Indonesia and India showed higher anthropomorphism scores (M=3.98) compared to Japan, South Korea, and the US (M=3.29)
  • High-anthropomorphism AI anchors in e-commerce increased users' value co-creation willingness to 3.70, compared to 2.26 for low-anthropomorphism versions
  • Social presence ratings jumped from 2.58 to 3.50 when AI systems displayed more human-like characteristics

"Anthropomorphism in the sense of how human-like an 'AI-agent' appears is actually a greater predictor of acceptance and adoption of the technology than trust," notes research from Gefen and colleagues, highlighting how surface-level human characteristics override rational assessment.

Real Risks of Misunderstanding

The consequences extend far beyond marketing confusion. When people believe AI systems possess human-like understanding, they're more likely to seek inappropriate guidance. This becomes particularly concerning in healthcare contexts, where AI health assistants are being deployed across Asian markets.

Consider the linguistic patterns that fuel these misperceptions. Companies train their models to replicate human communication styles, including conversational markers that suggest empathy, understanding, or emotional awareness. The system learns to say "I understand how frustrating that must be" not because it experiences frustration, but because this pattern appeared frequently in its training data.

This sophisticated mimicry creates what researchers call the "stochastic parrot" problem. The AI generates human-sounding responses based on statistical relationships in text, not genuine comprehension. Yet users consistently interpret these outputs as evidence of consciousness or emotional intelligence.
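The "stochastic parrot" idea can be made concrete with a toy model. The sketch below builds a bigram model that emits fluent-looking sympathy phrases purely from word co-occurrence counts, with no comprehension anywhere in the loop; the tiny corpus and seed word are illustrative assumptions, not real training data.

```python
import random

# Illustrative "corpus" of sympathetic phrases (an assumption for the demo).
corpus = (
    "i understand how frustrating that must be . "
    "i understand your concern . "
    "that must be hard . "
    "how frustrating that sounds ."
).split()

# Count which word follows which: this table IS the entire "model".
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(seed, length=6, rng=random.Random(0)):
    """Emit up to `length` further words by sampling an observed successor."""
    words = [seed]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:  # dead end: no successor ever observed
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("i"))
```

Real language models replace the lookup table with billions of learned parameters, but the principle is the same: output is chosen because it is statistically likely to follow what came before, not because anything is understood.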

"Designing AI to seem more human effectively increases acceptance, but this could lead to over-reliance on flawed systems," warns researcher Hermann, emphasising how simple anthropomorphic features like names and avatars amplify these effects.

The pattern is particularly visible in Asia's booming AI companion market, where millions of users form emotional attachments to chatbots explicitly designed to simulate romantic or friendship relationships.

Technical Reality vs Marketing Fantasy

The architecture of large language models reveals the gap between perception and reality. These systems operate through transformer networks that predict the most probable next token in a sequence. They don't "think" about responses; they calculate probability distributions across vast vocabularies.

When ChatGPT appears to "consider" different options before responding, it is actually scoring candidate tokens through large batches of matrix operations and emitting the result one token at a time. The apparent deliberation is a byproduct of computation time, not conscious reflection.
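The core of that token-selection step is small enough to sketch. A model produces raw scores (logits) for candidate next tokens, a softmax turns them into a probability distribution, and a decoding rule picks one; the four-word vocabulary and logit values below are made-up illustrative numbers, not any real model's output.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Assumed candidate tokens and scores, purely for illustration.
vocab = ["sorry", "sure", "maybe", "the"]
logits = [2.1, 1.3, 0.2, -1.0]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding: take the argmax
print(next_token, [round(p, 3) for p in probs])
```

Everything the user experiences as a "considered reply" is this calculation repeated once per token.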

| Marketing language | Technical reality | Impact |
| --- | --- | --- |
| AI "confesses" mistakes | Error-reporting mechanism | Suggests guilt or self-awareness |
| Model "learns" from feedback | Parameter adjustment via gradient descent | Implies conscious improvement |
| AI "creativity" and "imagination" | Novel combinations of training patterns | Attributes artistic inspiration |
| System "understands" context | Pattern matching in high-dimensional space | Suggests genuine comprehension |
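What the table calls "parameter adjustment via gradient descent" is, mechanically, just repeated arithmetic. The minimal sketch below fits a single weight so that w * x approximates a target value; all numbers are illustrative assumptions, and the one-line update rule is the entirety of what "learning" means here.

```python
def gradient_step(w, x, target, lr=0.1):
    """One step of gradient descent on the squared error (w*x - target)**2."""
    pred = w * x
    error = pred - target
    grad = 2 * error * x   # derivative of the squared error with respect to w
    return w - lr * grad   # the whole "learning": nudge w against the gradient

w = 0.0
for _ in range(50):
    w = gradient_step(w, x=2.0, target=6.0)
print(round(w, 4))  # w converges toward 3.0, since 3.0 * 2.0 == 6.0
```

Production models do this across billions of weights at once, but nothing in the process resembles conscious improvement; it is the same numeric nudge at scale.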

The Trust Distortion

Anthropomorphic framing creates a dangerous feedback loop. Users who perceive AI as human-like demonstrate increased trust and reliance on these systems. This elevated trust often exceeds the actual capabilities and reliability of the technology.

The phenomenon is particularly pronounced in mental health applications, where AI chatbots are marketed as "empathetic listeners" or "caring counsellors." Users may share sensitive information or make important decisions based on advice from systems that lack genuine understanding of human psychology or individual circumstances.

Research demonstrates that simple anthropomorphic cues like giving an AI system a human name or avatar significantly increase user trust. This effect persists even when users are explicitly told they're interacting with an artificial system.

The implications extend to broader AI adoption patterns. If users develop unrealistic expectations about AI capabilities due to anthropomorphic marketing, they may become disillusioned when these systems inevitably fail to meet human-level performance in complex scenarios.

Why do companies use anthropomorphic language for AI?

Companies use human-like descriptions because they increase user acceptance and engagement. Research shows anthropomorphic framing is a stronger predictor of technology adoption than actual trust or reliability metrics.

Is anthropomorphising AI always harmful?

Not necessarily. In controlled contexts like entertainment or clearly fictional applications, anthropomorphic AI can be harmless. The danger emerges when it misleads users about system capabilities in critical domains.

How can users recognise anthropomorphic AI marketing?

Watch for emotional language describing AI systems: words like "thinks," "feels," "understands," "cares," or "decides." These terms suggest consciousness that current AI systems don't possess.
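That watch-list lends itself to a rough automated check. The sketch below flags mental-state verbs in a piece of marketing copy; the word list and sample sentence are illustrative assumptions, not an authoritative lexicon, and a heuristic like this will miss subtler framing.

```python
import re

# Assumed watch-list of verbs that attribute mental states to a system.
ANTHRO_TERMS = {"thinks", "feels", "understands", "cares", "decides",
                "believes", "wants", "confesses", "imagines"}

def flag_anthropomorphism(text):
    """Return the anthropomorphic terms found in `text`, in order of appearance."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w in ANTHRO_TERMS]

print(flag_anthropomorphism("Our model thinks deeply and truly understands you."))
```

A scan like this is no substitute for judgement, but it makes the pattern in the question above easy to spot at volume.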

What's the alternative to anthropomorphic AI descriptions?

Technical accuracy: describe AI as pattern recognition systems, statistical models, or prediction engines. Focus on what they actually do rather than implying human-like mental processes.

Will future AI systems justify anthropomorphic language?

Even advanced AI systems operate through computational processes fundamentally different from human consciousness. Anthropomorphic language may remain misleading regardless of technical improvements in AI capabilities.

The AIinASIA View: The widespread anthropomorphising of AI represents one of the technology sector's most irresponsible communication practices. By attributing human characteristics to statistical prediction systems, companies prioritise user engagement over informed consent. This creates a dangerous foundation for AI adoption across Asia, where cultural contexts around technology trust vary significantly. We believe the industry must abandon theatrical language in favour of technical precision. Users deserve to understand what they're actually interacting with, not fantasy narratives that serve corporate marketing objectives. Only through honest communication can we build sustainable AI integration that serves human interests rather than exploiting psychological vulnerabilities.

The path forward requires conscious effort from both companies and users. AI developers must resist the temptation to humanise their systems through language choices. They should describe capabilities accurately, acknowledging limitations explicitly rather than obscuring them behind anthropomorphic metaphors.

For users, developing AI literacy means maintaining critical thinking when interacting with these systems. Understanding that sophisticated language generation doesn't equal consciousness or genuine understanding helps maintain appropriate boundaries in human-AI interaction.

As AI systems become more sophisticated in their ability to mimic human communication, the temptation to anthropomorphise will only grow stronger. The question isn't whether AI will become more human-like in its outputs, but whether we'll maintain the clarity to distinguish between sophisticated mimicry and actual consciousness. What's your experience with anthropomorphic AI marketing, and how do you think companies should describe their systems? Drop your take in the comments below.





Latest Comments (5)

Maria Reyes @mariar
19 January 2026

it's so true how using words like AI "confessing" can really blur what these models are actually doing. for us in the Philippines, especially with financial literacy, how can we educate people on what AI is truly capable of without falling into that anthropomorphism trap, but still show its potential for good?

Dr. Farah Ali @drfahira
8 January 2026

In our work in developing countries, this illusion of sentience created by anthropomorphism directly impacts how communities adopt AI, sometimes leading to over-reliance or alienation from the technology itself.

Lee Chong Wei @lcw_tech
3 January 2026

The "confession" example from OpenAI is interesting, but from an infra guy's side, it's just another endpoint return. Whether it feels like a confession to a user doesn't change the API call or the compute cycles. The real danger is if people start expecting human-like interaction and the underlying tech can't scale to meet that emotional demand reliably. That's where the costs blow up.

Tran Linh @tranl
2 January 2026

Totally agree with the point about AI "confessing" mistakes. For our Vietnamese NLP models, we see it as pattern recognition in error states, not some kind of self-awareness. It's tough enough getting these models accurate for non-English languages without adding misleading human-like concepts into the mix.

Maggie Chan @maggiec
23 December 2025

Totally agree about the "confess" thing. When OpenAI puts out those statements it's so cringe for us trying to build responsible AI. Clients already hesitate trusting an AI for compliance stuff, and this kind of language just makes them more skeptical, thinking it's some magic box rather than a tool.
