
AI in ASIA

OpenAI: Superintelligent AI is 'just around the corner'

OpenAI predicts superintelligent AI breakthrough within two years, reaching 900M users while pushing for global safety collaboration.

Intelligence Desk • 4 min read

AI Snapshot

The TL;DR: what matters, fast.

OpenAI claims AI systems are 80% of the way to matching human AI researchers

Company predicts superintelligent AI breakthrough by 2028 with 900M weekly users

Calls for global AI safety collaboration framework to manage rapid advancement risks

OpenAI Claims Superintelligent AI Breakthrough Within Two Years

OpenAI has declared that artificial intelligence systems are rapidly approaching human-level capabilities across all domains, with the company's leadership predicting major AI-driven discoveries by 2028. The announcement comes as the AI giant reports explosive user growth, reaching 900 million weekly active users and positioning itself at the centre of a global race towards artificial general intelligence.

The San Francisco-based company argues that current AI models already outperform humans on complex reasoning tasks, claiming these systems are "about 80% of the way to an AI researcher." This assessment follows OpenAI's remarkable financial trajectory, with analysts predicting the company will achieve $30 billion in revenue by 2026.

"By the end of 2028, more of the world's intellectual capacity could reside inside data centres than outside them," said Sam Altman, OpenAI CEO, at the AI Impact Summit 2026 in India.

The implications extend far beyond chatbots and customer service applications. OpenAI envisions AI systems capable of independent scientific discovery, potentially revolutionising fields from climate modelling to drug development within the next four years.


Global Collaboration: The New AI Safety Imperative

Recognising the profound risks accompanying such rapid advancement, OpenAI has outlined an ambitious framework for international cooperation on AI safety. The company advocates for a coordinated global response similar to how cybersecurity infrastructure evolved alongside the internet.

The proposed collaboration framework includes four key pillars:

  • Safety partnerships between governments and AI laboratories from the earliest development stages, establishing oversight and technical safeguards before deployment
  • Shared safety protocols amongst leading AI companies, with open publication of evaluation methods to prevent competitive pressure from compromising safety standards
  • Resilient AI safety infrastructure that mirrors cybersecurity's multi-layered approach, incorporating diverse tools, protocols, and response teams
  • Public accountability measures including regular reporting and global monitoring of AI's real-world impacts to inform evidence-based policy decisions

This call for cooperation comes amid growing concerns about AI safety expertise departing major companies. Recent high-profile exits have raised questions about whether commercial pressures are undermining safety priorities at leading AI laboratories.

The company's emphasis on collaboration reflects broader industry recognition that AI development has evolved beyond individual corporate capabilities, requiring coordinated international oversight similar to nuclear technology or climate policy.

By The Numbers

  • OpenAI's ChatGPT serves 900 million weekly active users, alongside 50 million consumer subscribers and 9 million paying business customers
  • The cost of a given level of AI capability is falling roughly 40-fold per year, making advanced AI increasingly accessible worldwide
  • China's generative AI market is projected to reach $70.4 billion by 2030 with a 45.1% compound annual growth rate
  • Combined revenue forecast for OpenAI, Anthropic, and xAI reaches $110 billion by December 2026
  • OpenAI raised $110 billion in funding at a $730 billion pre-money valuation, including major investments from Amazon, SoftBank, and NVIDIA

From Discovery Engine to Global Utility

OpenAI positions AI as fundamentally transforming from a productivity tool into a discovery engine capable of generating new knowledge. This shift represents a departure from current public perception, which remains largely focused on conversational AI and basic automation tasks.

Current AI Applications          Predicted 2028 Capabilities        Timeline
Text generation and chatbots     Independent scientific research    2026-2028
Image and video creation         Novel drug discovery               2027-2028
Code assistance                  Climate solution development       2026-2028
Customer service automation      Personalised global education      2026-2027

The company predicts small AI-driven discoveries as early as 2026, escalating to major breakthroughs by 2028. This timeline aligns with broader industry expectations about how AI reasoning models are evolving beyond simple pattern matching towards genuine problem-solving capabilities.

"The gap between how most people are using AI and what AI is presently capable of is immense," according to OpenAI's recent policy paper on frontier AI coordination.

This capability gap presents both opportunities and challenges for global adoption. While advanced AI could democratise access to sophisticated analytical tools, it also raises questions about workforce adaptation and economic disruption across multiple sectors.

Building AI Resilience: Learning from Cybersecurity

OpenAI's proposed "AI resilience ecosystem" draws explicit parallels to cybersecurity infrastructure development. Rather than seeking to eliminate all AI risks, the approach focuses on building robust systems that reduce risks to manageable levels while preserving innovation benefits.

The cybersecurity analogy proves particularly relevant given the internet's successful transition from experimental network to global infrastructure. Early internet development faced similar concerns about security, privacy, and societal impact, ultimately resolved through collaborative standards development and regulatory frameworks.

National governments play a crucial role in this vision: encouraging comprehensive safety frameworks while avoiding regulation that stifles innovation. This balanced approach mirrors ongoing efforts in Asia, where structured regulation focuses on safety and control rather than on blocking innovation outright.

The company envisions AI access becoming a fundamental utility comparable to electricity or clean water. This transformation would require significant infrastructure investment and coordinated planning across multiple jurisdictions and sectors.

Healthcare represents a particularly promising application area, with OpenAI recently expanding into medical AI tools alongside competitors like Anthropic. The potential for AI-powered personalised medicine could revolutionise healthcare delivery, particularly in underserved regions with limited medical expertise.

Frequently Asked Questions

What exactly does OpenAI mean by "superintelligent AI"?

OpenAI refers to AI systems that exceed human cognitive abilities across all domains, including scientific research, creative problem-solving, and strategic reasoning. These systems would be capable of independent discovery and innovation rather than simply following programmed instructions.

How realistic is OpenAI's 2028 timeline for major AI breakthroughs?

Industry experts remain divided on the timeline, though most agree rapid progress is likely. The 40-fold annual cost reduction in AI capabilities suggests significant technical advancement, though breakthrough timing remains inherently unpredictable in emerging technologies.
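The compounding effect of that 40-fold figure is easy to under-appreciate. As a rough illustration (not taken from the article), here is a minimal Python sketch projecting how a hypothetical $1.00 unit cost would fall under a sustained 40-fold annual decline; the function name and starting cost are assumptions chosen for the example.

```python
def project_cost(start_cost: float, years: int, annual_factor: float = 40.0) -> float:
    """Cost of a fixed unit of AI capability after `years` of a
    compounding `annual_factor`-fold decline per year."""
    return start_cost / (annual_factor ** years)

# Project the first five years from a hypothetical $1.00 starting cost.
for year in range(5):
    print(f"Year {year}: ${project_cost(1.00, year):.6f}")
```

Under this assumption, a task costing $1.00 today would cost 2.5 cents after one year and well under a tenth of a cent after two — which is why even sceptics of the 2028 timeline tend to agree that access costs, at least, are collapsing quickly.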

What specific safety measures does OpenAI propose for superintelligent AI?

OpenAI advocates for multi-layered safety infrastructure including government oversight, shared industry protocols, technical safeguards built into AI systems, and continuous monitoring of real-world impacts through international cooperation frameworks.

Will superintelligent AI replace human workers entirely?

OpenAI suggests AI will augment rather than replace human capabilities in most cases, though significant workforce adaptation will be required. The company emphasises that AI will create new kinds of work rather than simply eliminating existing jobs.

How does OpenAI's approach differ from competitors like Google and Anthropic?

While technical capabilities remain similar across leading AI companies, OpenAI emphasises global cooperation and utility-style AI access. This contrasts with more proprietary approaches from competitors, though convergence on safety standards appears likely across the industry.

The AIinASIA View: OpenAI's superintelligence predictions feel both ambitious and inevitable given current AI progress rates. However, our region faces unique challenges in preparing for this transition. Asian governments must balance innovation encouragement with safety oversight, while businesses need practical frameworks for AI adoption without premature workforce displacement. The company's emphasis on global cooperation is encouraging, but success depends on meaningful participation from Asian stakeholders rather than Western-dominated standard-setting. We believe the 2028 timeline is aggressive but achievable, making preparation urgent rather than optional.

The path towards superintelligent AI presents humanity with perhaps its greatest coordination challenge since nuclear weapons development. OpenAI's vision of collaborative safety infrastructure offers hope, but implementation requires unprecedented global cooperation across technical, regulatory, and social dimensions.

Success depends not only on technical breakthroughs but on our collective ability to govern transformative technology responsibly. As AI capabilities continue expanding at breakneck pace, the window for establishing effective governance frameworks may be narrower than we think. What role should your country play in shaping the global AI safety conversation? Drop your take in the comments below.

◇


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


This article is part of the AI Policy Tracker learning path.


Latest Comments (2)

Hye-jin Choi (@hyejinc)
10 December 2025

OpenAI calling for shared standards and resilience systems reminds me of discussions we've had at KAIST regarding a pan-Asian AI regulatory framework. Especially with countries like Singapore and even China accelerating their own national AI strategies, avoiding a "race to the bottom" across APAC is critical. I wonder if there's an opportunity for a regional body to lead on publishing joint evaluations.

Lakshmi Reddy (@lakshmi.r)
16 November 2025

OpenAI claiming AIs are "about 80% of the way to an AI researcher" is a huge statement but it totally glosses over cultural and linguistic nuances. For Indic languages, building robust NLP models still requires immense human research to label data and fine-tune for local dialects. We're a long way from an AI doing that autonomously.
