
AI Safety Experts Flee OpenAI: Is AGI Around the Corner?

Explore the recent exodus of AI safety researchers from OpenAI and its implications for the future of AI Safety Research.

Intelligence Desk · 3 min read

AI Snapshot

The TL;DR: what matters, fast.

OpenAI experienced an exodus of safety researchers concerned about the company's preparedness for artificial general intelligence (AGI).

Prominent researchers such as Ilya Sutskever and Jan Leike departed, citing a lack of trust in OpenAI's commitment to safe and responsible AI development.

These former OpenAI researchers continue to pursue AI safety at other organizations or through new ventures, indicating ongoing concerns within the field.

Who should pay attention: AI safety researchers | Policymakers | AI company executives

What changes next: Debate is likely to intensify regarding AI safety protocols and AGI development timelines.

About half of OpenAI's AGI safety researchers have left the company.

Former researcher Daniel Kokotajlo believes OpenAI is close to AGI but unprepared for its risks.

Departing experts remain committed to AI, moving to competitors or starting new ventures.

Artificial Intelligence (AI) is growing fast in Asia and around the world, and OpenAI is one of the field's biggest names. But recently, something surprising happened: many of the company's safety researchers left. Why did they leave? And what does this mean for the future of AI?

A Surprising Exodus

About half of OpenAI's AGI/ASI safety researchers have left the company. Daniel Kokotajlo, a former OpenAI safety researcher, talked to Fortune magazine about this. He said that these researchers left because OpenAI is "fairly close" to developing artificial general intelligence (AGI) but isn't ready to handle the risks that come with it. You can read more about the definitions of Artificial General Intelligence in our detailed article.

Kokotajlo didn't give specific reasons for all the resignations, but he believes they align with his own views. He said there is a "chilling effect" within the company on publishing research about AGI risks, and that OpenAI's communications and lobbying wings have a growing say over what is deemed appropriate to publish.

Key Departures:

Jan Hendrik Kirchner
Collin Burns
Jeffrey Wu
Jonathan Uesato
Steven Bills
Yuri Burda
Todor Markov
John Schulman (OpenAI co-founder)
Ilya Sutskever (Chief scientist)
Jan Leike (Superalignment team leader)

Why Did They Leave?

Kokotajlo said these departures weren't a "coordinated thing" but rather people "individually giving up." The resignations of Ilya Sutskever and Jan Leike were particularly significant: the pair jointly led the company's "superalignment" team, which focused on the safety of future AI systems, and OpenAI disbanded the team after their departure. This highlights a growing concern we explore in Why ProSocial AI Is The New ESG.

"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems." - Letter to Governor Newsom

"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems." - Letter to Governor Newsom

Where Did They Go?

The departing researchers didn't leave AI altogether; they just left OpenAI. Leike and Schulman moved to safety research roles at OpenAI competitor Anthropic. Before leaving OpenAI, Schulman said he believed AGI could arrive within two to three years. Sutskever founded his own startup, Safe Superintelligence Inc., to develop safe superintelligent AI.

This suggests that these experts still see potential in AI technology. They just no longer view OpenAI as the right place to pursue it.

OpenAI and Regulation

Kokotajlo expressed disappointment, but not surprise, that OpenAI opposed California's SB 1047, a bill that aims to regulate the risks of advanced AI systems. He co-signed a letter to Governor Newsom criticizing OpenAI's stance, calling it a betrayal of the company's original plan to thoroughly assess AGI's long-term risks so that regulations and laws could be developed. This mirrors discussions in Taiwan's AI Law Is Quietly Redefining What "Responsible Innovation" Means. For more on global AI governance, see the OECD AI Principles.

The Future of AI Safety

So, what does this all mean for the future of AI safety? It's clear that opinions differ on how to manage AGI risks. But it's also clear that many experts remain committed to ensuring AI is developed safely and responsibly. AI is a fast-changing field, so it's important to stay informed about the latest developments.

Comment and Share:

What do you think about the future of AI safety? Do you believe AGI is around the corner? Share your thoughts in the comments below. And don't forget to subscribe to our newsletter for updates on AI and AGI developments.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (3)

Kenji Suzuki (@kenjis) · 3 February 2026

@kenjis: "half of OpenAI's AGI safety researchers" leaving while "fairly close" to AGI is… concerning. In robotics, we simulate risk constantly. To have so many safety specialists exit just as the real-world deployment of such complex systems becomes imminent suggests a critical disconnect in risk assessment or strategy.

Jasmine Koh (@jasminek) · 22 January 2026

It's interesting to hear Daniel Kokotajlo's perspective on the "chilling effect" on internal research visibility, especially regarding AGI risks. While individual departures can accumulate, the disbanding of the superalignment team after Leike and Sutskever's exit suggests a more systemic shift. This brings up concerns about how internal research governance structures align with stated safety goals, a topic I'm actually presenting on next month at the Responsible AI Forum.

Lakshmi Reddy (@lakshmi.r) · 19 November 2024

The idea of a "chilling effect" on internal research at OpenAI, as Kokotajlo mentioned, is concerning. It reminds me of how difficult it can be for novel safety considerations relevant to, say, Indic language models to gain traction if they don't align with dominant commercial or research narratives. It suggests a systemic issue beyond just one company, about how priorities are set.
