
The AI Race to Destruction: Insiders Warn of Catastrophic Harm

Former OpenAI researcher Daniel Kokotajlo puts humanity's survival odds at just 30%, warning that AI companies recklessly race toward AGI while ignoring existential threats.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

  • Daniel Kokotajlo calculates a 70% probability that AI causes catastrophic harm or extinction
  • The former OpenAI researcher quit after losing confidence in the company's responsible AI development
  • A growing coalition of AI insiders asserts a "right to warn" the public about technological risks

Former OpenAI Researcher Puts AI Extinction Risk at Staggering 70%

The artificial intelligence industry faces its most sobering warning yet. Daniel Kokotajlo, a former governance researcher at OpenAI, has calculated humanity's chances of surviving the AI revolution at just 30%. His assessment places the probability of AI causing catastrophic harm or outright destruction at 70%, odds worse than a coin flip.

Kokotajlo's departure from OpenAI in April 2024 marked a turning point in the AI safety debate. He quit after losing confidence that the company would "behave responsibly" in its race toward artificial general intelligence (AGI). His warnings coincide with mounting evidence that the AI arms race traps us all on an upgrade treadmill, pushing companies to prioritise speed over safety.

The Rush to AGI Ignores Mounting Dangers

Kokotajlo accuses OpenAI of "recklessly racing to be the first" to achieve AGI, captivated by its revolutionary potential whilst disregarding existential risks. AGI represents AI systems that can understand, learn, and apply knowledge across tasks at human-level or beyond. Unlike narrow AI that excels at specific functions, AGI would possess generalised intelligence across all domains.


The former researcher personally urged OpenAI CEO Sam Altman to "pivot to safety," advocating for more time implementing guardrails rather than enhancing capabilities. However, Kokotajlo felt Altman's agreement was merely lip service, prompting his resignation from the company he could no longer trust.

His concerns extend beyond OpenAI's walls. The entire industry appears caught in a dangerous momentum, where competitive pressures override prudent safety measures.

By The Numbers

  • 80% of survey respondents express concern about AI being weaponised for cyberattacks
  • 78% worry about AI-enabled identity theft schemes targeting individuals
  • 74% fear AI-generated deceptive political advertisements undermining democracy
  • Twice as many companies developed "Frontier AI Safety Frameworks" in 2026 as in 2025
  • AI agents now rank in the top 5% of cybersecurity competition teams, a capability that criminals could also exploit
"Since the release of the inaugural International AI Safety Report a year ago, we have seen significant leaps in model capabilities, but also in their potential risks, and the gap between the pace of technological advancement and our ability to implement effective safeguards remains a critical challenge."
Yoshua Bengio, Chair, 2026 International AI Safety Report

A Growing Coalition Asserts the Right to Warn

Kokotajlo joins an expanding group of AI insiders asserting their "right to warn" the public about technological risks. This coalition includes current and former employees of Google DeepMind and Anthropic, as well as Geoffrey Hinton, widely regarded as the "Godfather of AI."

The movement represents unprecedented internal resistance within the AI industry. These experts possess intimate knowledge of cutting-edge developments and feel compelled to speak out despite potential professional consequences. Their collective voice challenges the narrative that AI development can proceed safely at breakneck speed.

Recent developments validate their concerns. In 2025, fears of biological misuse prompted multiple AI companies to release models with enhanced safeguards after testing revealed their potential to help novices develop biological weapons. Meanwhile, deepfakes increasingly enable criminal activities such as fraud, blackmail, and non-consensual intimate imagery.

"I had lost confidence that OpenAI will behave responsibly as it continues trying to build near-human-level AI."
Daniel Kokotajlo, Former Governance Researcher, OpenAI

The warnings extend beyond Western companies. Experts warn AI chatbots are not your friend, highlighting how seemingly benign applications can harbour unexpected risks. China recently drafted new regulations on "anthropomorphic AI" companions, acknowledging risks like emotional manipulation and safety concerns.

OpenAI Defends Track Record Amid Mounting Criticism

OpenAI responded to allegations by emphasising their commitment to safety and responsible development. The company highlighted existing channels for employee concerns, including an anonymous integrity hotline and a Safety and Security Committee led by board members and internal safety leaders.

The company says it is proud to provide "the most capable and safest AI systems" whilst acknowledging the importance of rigorous debate, and it continues engaging with governments, civil society, and global communities to address AI governance challenges.

However, critics argue these measures fall short given the stakes involved. The disconnect between public assurances and insider warnings raises questions about whether current safety frameworks can contain advanced AI systems.

Risk Category | 2024 Concern Level | 2025-2026 Developments
Cybersecurity Threats | Moderate | AI agents now compete at expert human levels
Biological Weapons | Theoretical | Multiple companies add safeguards after testing concerns
Deepfake Abuse | Emerging | Criminal exploitation targeting women and girls increases
Democratic Manipulation | Growing | 74% public concern about AI-generated political ads

Asia Grapples With AI Safety Challenges

The AI safety debate resonates strongly across Asia, where rapid technological adoption meets diverse regulatory approaches. Asia's AI literacy race is reshaping education, but the focus on capabilities often overshadows safety considerations.

Recent developments highlight regional variations in addressing AI risks:

  • China's draft regulations on anthropomorphic AI impose significant compliance burdens to address emotional manipulation risks
  • Japan leads eldercare AI adoption whilst grappling with safety standards for vulnerable populations
  • Singapore balances SME AI adoption with comprehensive governance frameworks
  • South Korea experiments with AI companions for elderly citizens, raising questions about psychological dependency
  • Taiwan integrates AI health assistants into national healthcare, setting precedents for medical AI safety

Professor Antonio Krüger, CEO of DFKI and German representative on the 2026 International AI Safety Report, emphasised AI's growing autonomy and misuse risks. The report involved experts from over 30 countries, including significant Asia-Pacific representation, highlighting global recognition of these challenges.

The regional response varies significantly. Whilst some nations rush to embrace AI across sectors, others adopt more cautious approaches emphasising safety frameworks and gradual implementation.

Frequently Asked Questions

What exactly does "catastrophic AI harm" mean according to experts?

Catastrophic AI harm encompasses scenarios ranging from widespread economic disruption and social collapse to human extinction. Experts worry about AI systems operating beyond human control, causing irreversible damage to civilisation or directly threatening human survival through various pathways.

Why is the 70% probability estimate so high?

The 70% figure reflects accelerating AI capabilities, insufficient safety research, competitive pressures overriding caution, and the difficulty of controlling superintelligent systems. Former OpenAI researcher Kokotajlo bases this on insider knowledge of development trajectories and safety preparedness gaps.
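
As a purely illustrative sketch, and not Kokotajlo's published method, headline risk figures like this are often assembled by chaining conditional estimates:

$$
P(\text{catastrophe}) \approx P(\text{AGI arrives soon}) \times P(\text{misaligned} \mid \text{AGI}) \times P(\text{uncontained} \mid \text{misaligned})
$$

With hypothetical inputs of 0.9, 0.9 and 0.85, the product is roughly 0.69, close to the quoted 70%. The decomposition also shows where such an estimate is contestable: the headline number is only as defensible as each conditional input feeding it.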

How do current AI safety measures compare to the risks?

Safety frameworks lag significantly behind capability development. Whilst companies develop voluntary safety plans, these remain fallible and unenforceable. The gap between technological advancement pace and effective safeguard implementation continues widening, according to safety experts.

What role does the competitive AI race play in these risks?

Competition drives companies to prioritise speed over safety, fearing rivals will capture market advantages first. This dynamic creates perverse incentives where thorough safety testing becomes a competitive disadvantage, potentially leading to premature deployment of dangerous systems.
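
One standard way to formalise this dynamic, offered here as a stylised illustration with made-up payoffs rather than anything sourced from the article's experts, is a two-lab race game:

$$
\begin{array}{c|cc}
 & \text{Rival: Safe} & \text{Rival: Fast} \\
\hline
\text{Safe} & (3,\,3) & (0,\,4) \\
\text{Fast} & (4,\,0) & (1,\,1)
\end{array}
$$

"Fast" strictly dominates for each lab, so the equilibrium is mutual racing at (1, 1), even though mutual caution at (3, 3) leaves both better off. That is the textbook prisoner's-dilemma structure behind the claim that thorough safety testing becomes a competitive disadvantage.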

Can international cooperation address these AI safety challenges?

International cooperation remains essential but challenging given geopolitical tensions and competitive national interests. The 2026 International AI Safety Report represents progress, but binding agreements and enforcement mechanisms remain elusive across different regulatory philosophies and development approaches.

The AIinASIA View: The disconnect between insider warnings and industry rhetoric demands urgent attention. When former OpenAI researchers risk their careers to warn of a 70% probability of catastrophe or extinction, we must listen. The AI industry's self-regulation approach isn't working. Asia's rapid AI adoption makes regional leadership on safety frameworks crucial. We need binding international agreements, mandatory safety testing, and development moratoria until robust safeguards exist. The stakes are too high for voluntary compliance and corporate goodwill.

The AI safety debate has reached a critical juncture. Expert warnings about catastrophic risks demand serious consideration, yet competitive pressures continue driving rapid development. As AI therapists boom across Asia Pacific and one in three adults now use AI for mental health, the urgency of addressing these fundamental safety questions intensifies.

The choice between cautious development and technological supremacy may determine humanity's future relationship with artificial intelligence. Do you believe the 70% catastrophic risk estimate justifies slowing AI development, or do the potential benefits outweigh these existential concerns? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


Latest Comments (4)

Marcus Thompson (@marcust) · 11 January 2026

70% chance of catastrophic harm, that’s a pretty bold claim from Kokotajlo. It makes you wonder how they even quantify something like that. From a practical standpoint, when we’re looking at integrating AI tools into our products, that kind of number just isn’t actionable. It’s either safe enough for our customers or it isn’t. How do these "experts" translate such a high-level, almost philosophical risk, into concrete safety protocols or engineering requirements that a team can actually implement? Because right now, it sounds more like a feeling than a metric.

Ahmad Razak (@ahmadrazak) · 5 September 2024

The 70% destruction estimate from Mr. Kokotajlo is indeed a stark figure. From a policy perspective, such projections underscore the urgency for more robust, regionally-aligned regulatory frameworks, much like those we're discussing within the ASEAN AI Blueprint. This is about more than just national roadmaps now; it's about concerted regional action to ensure safety isn't an afterthought.

Min-jun Lee (@minjunl) · 5 September 2024

Kokotajlo's 70% figure, while alarming, definitely complicates future funding rounds for AGI-focused startups. Investors will be scrutinizing safety protocols even more now.

Vikram Singh (@vik_s) · 5 September 2024

70% chance of destruction? I remember when everyone said blockchain was going to decentralize everything and usher in a new era. We've seen cycles like this before.
