

AI in ASIA

The danger of anthropomorphising AI

Giving AI human traits feels harmless, but it's shaping a dangerous illusion. Discover why this common practice distorts our understanding and fosters misplaced trust.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

AI companies' use of human-like language misleads the public about AI capabilities and limitations.

Terms like AI "confessing" or "thinking" incorrectly attribute human qualities to complex algorithms.

This anthropomorphic framing distorts public perception and can lead to misplaced trust in AI for critical advice.

Who should pay attention: AI developers | Ethicists | Policymakers | The public

What changes next: Debate is likely to intensify regarding responsible AI communication practices.

The pervasive use of anthropomorphic language by AI companies is creating a misleading narrative around artificial intelligence, hindering public understanding and fostering misplaced trust. While tech giants strive to make their AI models appear increasingly sophisticated, their choice of words often imbues these systems with human-like qualities they simply don't possess.

Terms like AI "thinking," "planning," and even possessing a "soul," "confessing," or "scheming" are becoming commonplace. This isn't just a harmless marketing tactic; it's a practice that obscures the true nature of AI, making it harder for the public to grasp its limitations and risks. At a time when clarity about AI is paramount, this theatrical language is counterproductive.

The Problem with Anthropomorphism

When OpenAI, for instance, discusses its models being able to "confess" mistakes, it implies a psychological depth that isn't present. While it's a valuable experiment to see how a chatbot self-reports issues like hallucinations, framing it as a "confession" suggests intent or remorse. This human-centric terminology is dangerous because it glosses over the fundamental mechanisms of large language models (LLMs).

AI systems don't have souls, motives, feelings, or morality. They don't "confess" out of honesty; they generate text patterns based on statistical relationships derived from vast datasets. Any perceived humanity is a projection of our own consciousness onto a complex algorithm. This misrepresentation has tangible consequences. For example, some individuals are unfortunately turning to AI chatbots for medical or financial guidance, rather than qualified professionals, mistakenly trusting the AI's "advice" as if it were from a sentient being.
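The point about statistical text generation can be made concrete with a toy sketch. The bigram table and phrases below are invented for illustration and bear no relation to any vendor's actual model, but the principle is the same: an "apology" emerges because it is the highest-probability continuation of the preceding tokens, not because the system feels remorse.

```python
# Toy next-token model: continuation probabilities as might be
# estimated from (hypothetical) training-data counts. Real LLMs use
# billions of learned parameters, but text is still produced by
# choosing likely continuations, not by introspection.
BIGRAM_PROBS = {
    "I": {"apologise": 0.6, "think": 0.4},
    "apologise": {"for": 1.0},
    "for": {"the": 1.0},
    "the": {"error.": 1.0},
}

def generate(token, max_len=5):
    """Greedy decoding: always take the most probable next token."""
    out = [token]
    for _ in range(max_len):
        dist = BIGRAM_PROBS.get(token)
        if dist is None:  # no known continuation: stop generating
            break
        token = max(dist, key=dist.get)
        out.append(token)
    return " ".join(out)

print(generate("I"))  # → I apologise for the error.
```

The "confession" here is just a lookup and an argmax; describing the same behaviour in a large model as honesty or remorse adds nothing but the illusion.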

The Impact on Trust and Understanding

This anthropomorphic framing distorts public perception. When we assign emotional intelligence or consciousness to an entity where none exists, we begin to trust AI in ways it was never intended to be trusted. This can lead to over-reliance and a misunderstanding of what these systems are genuinely capable of, and more importantly, what they are not.

Companies often train LLMs to mimic human language and communication. As highlighted in the influential 2021 paper "On the Dangers of Stochastic Parrots," systems designed to replicate human language will naturally reflect its nuances, syntax, and tone [https://dl.acm.org/doi/pdf/10.1145/3442188.3445922]. This likeness, however, does not equate to genuine understanding or sentience; it merely indicates the model is performing its optimised function. When a chatbot imitates human interaction convincingly, it's easy for us to project humanity onto the machine, even though it's not truly present.

Focusing on the Real Issues

Instead of engaging in fantastical metaphors, a more precise and technical vocabulary is needed. Rather than "soul," we should discuss a model's architecture or training. "Confession" could be replaced with error reporting or internal consistency checks. Describing a model as "scheming" is less accurate than discussing its optimisation process or unexpected outputs driven by training data. More appropriate terms include trends, outputs, representations, optimisers, model updates, or training dynamics. These terms might be less dramatic, but they are grounded in reality.

The language used inevitably shapes public perception and behaviour around technology. When words are imprecise or intentionally anthropomorphic, the distortion primarily benefits AI companies, making their LLMs appear more capable and human than they actually are. This distracts from critical issues that truly deserve attention: data bias, potential misuse by malicious actors, safety, reliability, and the concentration of power within a few tech giants. Addressing these concerns doesn't require mystical language.

If AI companies genuinely want to build public trust, they must adopt a more accurate and responsible approach to communication. This means ceasing to portray language models as mystical entities with feelings or intentions. Our language should reflect the reality of AI, not obscure it. For more on the specifics of how AI models work, you might find our guide, Small vs. Large Language Models Explained, insightful. We also cover how to build AI skills for those looking to understand the technology better.
