
AI in ASIA

Deliberating on the Many Definitions of Artificial General Intelligence

This article explores the contested definitions of Artificial General Intelligence, why they matter, and how shifting language shapes AI discourse in Asia and beyond. It highlights competing visions from industry and academia, while questioning whether the term AGI should be retired or refined.

Intelligence Desk · 5 min read

The debate over what counts as true Artificial General Intelligence is more unsettled than the technology itself.

There is no universally agreed definition of Artificial General Intelligence (AGI), despite its central place in AI debates. Shifting definitions allow companies and researchers to frame AI progress in ways that suit their own agendas. The absence of clarity risks undermining serious discourse on progress, risks, and responsibilities in AI development.

Why Defining AGI Matters

Artificial General Intelligence (AGI) is often described as the pinnacle of AI, but even this seemingly straightforward statement hides a swamp of competing interpretations. Should AGI mean machines that can think like humans? Or simply those that can outperform us in many, but not all, intellectual tasks? If we cannot settle on what AGI is, how will we ever agree when we have achieved it?

Socrates once observed that the beginning of wisdom lies in the definition of terms. Today’s AI community might do well to revisit that advice. Without clarity, we are left with a noisy marketplace where headlines shout of breakthroughs, but insiders quietly admit that the cheese keeps being moved.

The Shifting Goalposts of Pinnacle AI

The AI industry has long divided the technology into three stages: today's narrow AI, tomorrow's general AI, and the speculative possibility of superintelligence. Narrow AI, from recommendation engines to translation tools, excels at specific tasks but fails to generalise. AGI is meant to close this gap, functioning across domains with human-level adaptability. Superintelligence, in turn, refers to systems that vastly exceed human capacity.

Yet the language around AGI is anything but stable. OpenAI chief Sam Altman, for instance, has variously described AGI as systems that can “tackle increasingly complex problems, at human level, in many fields” and more recently dismissed the term as “not super useful”. Such flexibility may suit a commercial narrative, especially when each new model is marketed as inching us towards AGI. But it leaves the public and policymakers guessing.

Why Definitions Are Contested

The problem is not only linguistic but philosophical. Intelligence itself resists neat boundaries. Some researchers argue AGI should be defined as machines able to solve all human‑solvable problems. Others emphasise adaptability, autonomy, or economic usefulness. The Association for the Advancement of Artificial Intelligence (AAAI) recently conceded that there is no agreed test for AGI’s achievement, with some resorting to the phrase “we’ll know it when we see it”.

This lack of consensus creates risks. Without a shared definition, companies can declare victory on their own terms. A model that beats humans in “economically valuable work” may be hailed as AGI in one narrative, while another insists true AGI requires competence across every intellectual domain. These are not minor differences but fundamentally divergent visions of AI’s purpose, echoing concerns raised in conversations about AI cognitive colonialism.

Competing Definitions in Practice

A quick survey illustrates the spread:

- Gartner (2024): AGI is the hypothetical machine intelligence able to perform any intellectual task that humans can, across real or virtual environments.
- Wikipedia (2025): AGI is AI capable of performing the full spectrum of cognitively demanding tasks at or above human proficiency.
- IBM (2024): AGI marks the stage where AI systems can match or exceed human cognitive abilities across any task, representing the ultimate goal of AI research.
- OpenAI Charter: AGI is defined as highly autonomous systems that outperform humans at most economically valuable work.

The contrast is stark. Some highlight universality, others economic productivity. Some stress parity with humans, others surpassing them. No two capture quite the same spirit. For a deeper dive into the technical aspects, a report by the Future of Humanity Institute at Oxford University provides further context on the definitions and implications of AGI.

The Danger of Moving the Cheese

Definitions matter because they anchor expectations. If AGI means “all tasks”, then we remain decades away. If it means “many fields”, companies can claim progress today. The risk, as critics point out, is that definitions become tailored to fit product cycles rather than scientific milestones. As a result, debates about progress, safety, and governance collapse into talking past one another.

The metaphor of “moving the cheese” captures this tendency. In AI, the cheese often shifts to wherever the latest model happens to land. This not only confuses outsiders but erodes trust in the field itself.

Should We Retire the Term?

Some scholars argue that the phrase AGI has become too polluted to be useful. It conjures science‑fiction imagery, invites hype, and fuels polarisation. Perhaps, they suggest, we should abandon it altogether in favour of more precise terms. Others counter that AGI has already achieved public recognition and discarding it would only muddy waters further.

For now, the term seems set to persist. But if it does, the field must work harder to enclose what one writer called the “wilderness of ideas” within a coherent definition. Otherwise, debates on AI’s risks and opportunities will remain unmoored.

Next time a headline proclaims that AGI is imminent, the sensible question is not when but what the writer means by AGI. Until the field settles its terms, the debate will remain less about machines and more about semantics. And yet, those semantics may decide how governments regulate, how investors allocate billions, and how societies prepare for what comes next.

So, what definition of AGI would you accept?


Latest Comments (3)

Chen Ming (@chenming) · 14 October 2025

It's interesting how the article points out Sam Altman's shifting definitions for AGI. From my perspective covering AI in China, we see similar flexibility, but sometimes with a different emphasis. Here, the focus often leans into practical applications and what capabilities a system has rather than abstract definitions. Companies are less worried about a pure "human-level" philosophical debate and more about whether their large models can actually generalize well enough for vertical industry solutions, say, in manufacturing or healthcare. They might not use the exact term AGI as much, but the underlying push for broader, more adaptable intelligence is definitely there, even if the goalposts are also moving, just maybe not in such publicly articulated ways.

Yuki Tanaka (@yukit) · 13 October 2025

i agree, the lack of a stable definition for agi makes it difficult to compare research. if altman says agi is "not super useful", what are the metrics or benchmarks openai is using internally to track progress towards something like human-level problem solving? are they sharing those?

Somchai Wongsa (@somchaiw) · 30 September 2025

The article rightly points out the definitional swamp. From a policy perspective, this ambiguity around AGI is precisely why we need frameworks like the ASEAN Digital Economy Framework to provide some common ground. When the industry itself cannot agree on what they are building, how can governments effectively regulate or plan for the future impacts?
