The Great AGI Definition Debate: Why Nobody Agrees What 'General Intelligence' Actually Means
The debate over what counts as true Artificial General Intelligence is more unsettled than the technology itself. There is no universally agreed definition of AGI, despite its central place in AI debates. Shifting definitions allow companies and researchers to frame AI progress in ways that suit their own agendas, whilst the absence of clarity risks undermining serious discourse on progress, risks, and responsibilities in AI development.
Why Defining AGI Has Become an Existential Problem
Artificial General Intelligence is often described as the pinnacle of AI, but even this seemingly straightforward statement hides a swamp of competing interpretations. Should AGI mean machines that can think like humans? Or simply those that can outperform us in many, but not all, intellectual tasks? If we cannot settle on what AGI is, how will we ever agree when we have achieved it?
Socrates once observed that the beginning of wisdom lies in the definition of terms. Today's AI community might do well to revisit that advice. Without clarity, we are left with a noisy marketplace where headlines shout of breakthroughs, but insiders quietly admit that the goalposts keep shifting.
The stakes couldn't be higher. As recent analysis from Meta's chief scientist suggests, the timeline for achieving true AGI remains contentious, making clear definitions even more critical for investors, policymakers, and the public.
By The Numbers
- The global AI market is projected to grow from $638.23 billion in 2025 to $757.58 billion in 2026
- Total worldwide AI spending is expected to surpass $2.02 trillion in 2026
- 88% of companies report using AI in at least one business function, up from 78% the previous year
- 72% of organisations are using generative AI in one or more business functions
The Shifting Goalposts of Intelligence
The AI industry has long divided machine intelligence into three stages: today's narrow AI, tomorrow's general AI, and the speculative possibility of superintelligence. Narrow AI, from recommendation engines to translation tools, excels at specific tasks but fails to generalise. AGI is meant to close this gap, functioning across domains with human-level adaptability.
Yet the language around AGI is anything but stable. OpenAI chief Sam Altman, for instance, has variously described AGI as systems that can "tackle increasingly complex problems, at human level, in many fields" and more recently dismissed the term as "not super useful". Such flexibility may suit a commercial narrative, especially when each new model is marketed as inching us towards AGI.
"By reasonable standards, current large language models already constitute AGI," concluded UC San Diego experts Eddy Keming Chen, Mikhail Belkin, Leon Bergen, and David Danks in their 2026 analysis.
This leaves the public and policymakers guessing. The broader implications of how AI definitions shape global discourse are explored in recent research on human-AI differences.
Why Definitions Are Contested Territory
The problem is not only linguistic but philosophical. Intelligence itself resists neat boundaries. Some researchers argue AGI should be defined as machines able to solve all human-solvable problems. Others emphasise adaptability, autonomy, or economic usefulness. The Association for the Advancement of Artificial Intelligence recently conceded that there is no agreed test for AGI's achievement, with some resorting to the phrase "we'll know it when we see it".
"We have built highly capable systems, but we do not understand why we were successful," states Leon Bergen on LLMs' limitations despite capabilities.
This lack of consensus creates risks. Without a shared definition, companies can declare victory on their own terms. A model that beats humans in "economically valuable work" may be hailed as AGI in one narrative, while another insists true AGI requires competence across every intellectual domain.
These are not minor differences but fundamentally divergent visions of AI's purpose. The challenge extends beyond technical capabilities to questions of AI reliability and safety that definitions must address.
| Organisation | AGI Definition Focus | Key Emphasis |
|---|---|---|
| Gartner | Human-level intellectual tasks | Universality across domains |
| OpenAI | Economic value creation | Practical productivity gains |
| IBM | Matching human cognition | Cognitive parity and beyond |
| Wikipedia | Full spectrum capability | Comprehensive task performance |
The Danger of Moving Goalposts
Definitions matter because they anchor expectations. If AGI means "all tasks", then we remain decades away. If it means "many fields", companies can claim progress today. The risk, as critics point out, is that definitions become tailored to fit product cycles rather than scientific milestones.
The following factors complicate AGI definition attempts:
- Intelligence lacks universal measurement standards across cultures and contexts
- Commercial interests incentivise flexible interpretations that favour current capabilities
- Technical achievements often outpace theoretical frameworks for understanding them
- Public expectations are shaped by science fiction rather than scientific reality
- Regulatory frameworks require precise definitions that the field cannot yet provide
As a result, debates about progress, safety, and governance dissolve into people talking past one another. The metaphor of "moving the cheese", borrowed from the business fable Who Moved My Cheese?, captures this tendency perfectly. In AI, the cheese shifts to wherever the latest model happens to land.
Should We Retire the Term Altogether?
Some scholars argue that the phrase AGI has become too polluted to be useful. It conjures science-fiction imagery, invites hype, and fuels polarisation. Perhaps, they suggest, we should abandon it altogether in favour of more precise terms. Others counter that AGI has already achieved public recognition, and that discarding it would only muddy the waters further.
The term seems set to persist for now. But if it does, the field must work harder to contain what one writer called the "wilderness of ideas" within a coherent definition. Otherwise, debates on AI's risks and opportunities will remain unmoored. This challenge is particularly acute in Asia, where different cultural and policy contexts add another layer of complexity to AGI discussions.
What exactly constitutes "general" in AGI?
The "general" in AGI typically refers to versatility across domains, unlike narrow AI that excels in specific tasks. However, consensus remains elusive on whether this means human-level performance in all cognitive areas or just broad adaptability across multiple fields without domain-specific training.
How do current AI models compare to AGI definitions?
Current large language models demonstrate broad capabilities but lack true understanding, consistent reasoning, and autonomous goal-setting that most AGI definitions require. They excel at pattern matching and text generation but struggle with novel problem-solving and genuine comprehension across all domains.
Why do tech companies keep changing their AGI definitions?
Companies adjust definitions to align with their current capabilities and market positioning. This allows them to claim progress towards AGI whilst managing investor expectations and regulatory scrutiny. The lack of industry-wide standards enables this flexibility, though it undermines scientific rigour.
Could we achieve AGI without agreeing on its definition?
Theoretically possible but practically problematic. Without clear definitions, we cannot establish proper safety protocols, regulatory frameworks, or research priorities. Multiple systems might be declared "AGI" simultaneously by different organisations using different criteria, creating confusion and potential risks.
What role should governments play in defining AGI?
Governments need working definitions for regulation and policy-making, but shouldn't dictate scientific terms. Instead, they should facilitate dialogue between researchers, companies, and civil society to establish consensus. International cooperation will be essential given AGI's global implications and cross-border development efforts.
Next time a headline proclaims that AGI is imminent, the sensible question is not when but what the writer means by AGI. Until the field settles its terms, the debate will remain less about machines and more about semantics. Yet those semantics may decide how governments regulate, how investors allocate billions, and how societies prepare for what comes next. The relationship between AI and human intelligence hangs in the balance of these definitional debates.
So what definition of AGI would you accept, and why does it matter for Asia's AI future? Drop your take in the comments below.