
AI in ASIA

The AI Race to Destruction: Insiders Warn of Catastrophic Harm

AI experts, including a former OpenAI employee, warn of the potential for AI to cause destruction or catastrophic harm. The concerns centre on the pursuit of AGI and a perceived disregard for safety measures.

Intelligence Desk · 3 min read

AI Snapshot

The TL;DR: what matters, fast.

A former OpenAI researcher, Daniel Kokotajlo, warns there is a 70% chance AI could cause catastrophic harm or destruction to humanity.

Kokotajlo and other AI experts accuse companies like OpenAI of recklessly pursuing artificial general intelligence (AGI) without sufficient safety measures.

Despite internal and external calls for caution, OpenAI asserts its commitment to developing capable and safe AI systems while encouraging debate.

Who should pay attention: AI developers | Ethicists | Policymakers | The public

What changes next: Debate is likely to intensify regarding AI safety and the pursuit of AGI.


AI's Doomsday Prediction: 70% Chance of Destruction

Daniel Kokotajlo, a former governance researcher at OpenAI, has issued a chilling warning. He believes that the odds of artificial intelligence (AI) either destroying or causing catastrophic harm to humankind are greater than a coin flip, specifically around 70%. These concerns echo broader discussions, such as "AI's Secret Revolution: Trends You Can't Miss", about the potential for unintended consequences.

The Race to AGI: Ignoring the Risks

Kokotajlo's concerns stem from OpenAI's pursuit of artificial general intelligence (AGI). AGI is a type of AI that can understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond a human being. Understanding the many competing definitions of AGI, as explored in "Deliberating on the Many Definitions of Artificial General Intelligence", is crucial to grasping the scope of these warnings.

Kokotajlo accuses OpenAI of ignoring the monumental risks posed by AGI. He claims that the company is "recklessly racing to be the first" to develop AGI, captivated by its possibilities but disregarding the potential dangers.

Asserting the 'Right to Warn'

Kokotajlo is not alone in his concerns. Alongside former and current employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the so-called "Godfather of AI," he is asserting a "right to warn" the public about the risks posed by AI. This movement highlights the growing debate explored in "Why ProSocial AI Is The New ESG".

A Call for Safety: Urging OpenAI's CEO

Kokotajlo's concerns about AI's potential harm led him to personally urge OpenAI's CEO, Sam Altman, to "pivot to safety." He wanted the company to spend more time implementing guardrails to rein in the technology rather than continuing to make it smarter.

However, Kokotajlo felt that Altman's agreement was merely lip service. Fed up, Kokotajlo quit the firm in April, stating that he had "lost confidence that OpenAI will behave responsibly" as it continues trying to build near-human-level AI.

OpenAI's Response: Engaging in Rigorous Debate

In response to these allegations, OpenAI stated that it is proud of its track record of providing the most capable and safest AI systems. The company agrees that rigorous debate is crucial and says it will continue to engage with governments, civil society and other communities around the world. It also highlighted its avenues for employees to express concerns, including an anonymous integrity hotline and a Safety and Security Committee led by members of its board and safety leaders from the company. For further reading on AI safety, the European Commission's Ethics Guidelines for Trustworthy AI provides a comprehensive framework.

The Future of AI: A Call to Action

The warnings from Kokotajlo and other AI experts underscore the need for a more cautious and responsible approach to AI development. As we race towards AGI, it is crucial to prioritise safety and address the potential risks posed by this powerful technology. This sentiment is increasingly reflected in global discussions, including those around "North Asia: Diverse Models of Structured Governance for AI".

Comment and Share:

What are your thoughts on the warnings from AI experts about the potential destruction or catastrophic harm caused by AI? Do you think we are doing enough to prioritise safety in AI development? We'd love to hear your thoughts. Subscribe to our newsletter for updates on AI and AGI developments.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


Latest Comments (4)

Marcus Thompson (@marcust) · 11 January 2026

70% chance of catastrophic harm, that’s a pretty bold claim from Kokotajlo. It makes you wonder how they even quantify something like that. From a practical standpoint, when we’re looking at integrating AI tools into our products, that kind of number just isn’t actionable. It’s either safe enough for our customers or it isn’t. How do these "experts" translate such a high-level, almost philosophical risk, into concrete safety protocols or engineering requirements that a team can actually implement? Because right now, it sounds more like a feeling than a metric.

Vikram Singh (@vik_s) · 5 September 2024

70% chance of destruction? I remember when everyone said blockchain was going to decentralize everything and usher in a new era. We've seen cycles like this before.

Min-jun Lee (@minjunl) · 5 September 2024

Kokotajlo's 70% figure, while alarming, definitely complicates future funding rounds for AGI-focused startups. Investors will be scrutinizing safety protocols even more now.

Ahmad Razak (@ahmadrazak) · 5 September 2024

The 70% destruction estimate from Mr. Kokotajlo is indeed a stark figure. From a policy perspective, such projections underscore the urgency for more robust, regionally-aligned regulatory frameworks, much like those we're discussing within the ASEAN AI Blueprint. This is about more than just national roadmaps now; it's about concerted regional action to ensure safety isn't an afterthought.
