
AI in ASIA
Business

AI Safety Experts Flee OpenAI: Is AGI Around the Corner?

OpenAI loses half its AGI safety researchers in mass exodus as experts warn the company is unprepared for imminent artificial general intelligence risks.

Intelligence Desk • 4 min read

AI Snapshot

The TL;DR: what matters, fast.

OpenAI lost 50% of its AGI safety researchers, including co-founder John Schulman and chief scientist Ilya Sutskever

Departing experts cite concerns that OpenAI is close to AGI but unprepared for associated risks

Safety researchers are moving to Anthropic and founding new companies focused on responsible AI development

Mass Exodus From AI Safety Teams Signals Growing AGI Concerns

OpenAI has lost approximately half of its artificial general intelligence safety researchers in a dramatic wave of departures that's reshaping the AI safety landscape. The exodus, which includes co-founder John Schulman and chief scientist Ilya Sutskever, reflects deepening concerns about the company's approach to managing AGI risks.

Daniel Kokotajlo, a former OpenAI safety researcher, told Fortune that these departures stem from the belief that OpenAI is "fairly close" to developing AGI but remains unprepared for the associated risks. The company's disbanded superalignment team and shifting priorities have left many safety experts questioning whether commercial pressures are overriding caution.

High-Profile Departures Signal Deeper Issues

The resignation wave began with Sutskever and Jan Leike, who jointly led OpenAI's superalignment team focused on the safety of future AI systems. OpenAI dissolved the team immediately after their departure, with Leike later joining Anthropic and publicly criticising OpenAI's direction.


"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," said Jan Leike, former OpenAI superalignment team leader, who now heads safety research at Anthropic.

The departures weren't limited to OpenAI. In early 2026, xAI lost 12 staffers, while Anthropic saw its own high-profile resignation when Mrinank Sharma, former head of safeguards research, stepped down citing ethical concerns.

"Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions," said Mrinank Sharma in his February 2026 resignation from Anthropic.

By The Numbers

  • Approximately 50% of OpenAI's AGI safety researchers have left the company
  • 12 staffers departed from xAI in early 2026 over safety concerns
  • OpenAI slowed hiring growth in January 2026, with CEO Sam Altman citing AI efficiency gains
  • The superalignment team, focused on future AI safety, was completely disbanded after leadership departures
  • Multiple resignations occurred across Anthropic, OpenAI, and xAI in the week prior to February 12, 2026

Where Safety Experts Are Heading Next

Despite leaving their previous positions, these researchers haven't abandoned AI development entirely. They're redirecting their efforts toward organisations they believe prioritise safety more effectively.

Key departures include:

  • Jan Leike moved to Anthropic to lead safety research efforts
  • John Schulman, OpenAI co-founder, joined Anthropic for safety-focused work
  • Ilya Sutskever founded Safe Superintelligence Inc to develop safe AGI systems
  • Multiple researchers transitioned to startups focused on responsible AI development

The migration pattern suggests these experts remain committed to AGI development but seek environments that align with their safety priorities. Many are finding homes at companies like Anthropic, which has positioned itself as a safety-first alternative to OpenAI's more aggressive commercial approach.

Regulatory Tensions and Corporate Opposition

The safety concerns extend beyond internal company dynamics to broader regulatory debates. Former OpenAI researchers have criticised the company's opposition to California's SB 1047 bill, which aims to regulate advanced AI system risks.

| Company | Regulatory Stance | Key Safety Developments |
| --- | --- | --- |
| OpenAI | Opposed California SB 1047 | Disbanded superalignment team |
| Anthropic | Supported safety regulations | Recruiting former OpenAI safety staff |
| xAI | Limited public stance | Facing internal safety staff departures |

Kokotajlo and other former researchers co-signed a letter to Governor Newsom describing OpenAI's regulatory opposition as a betrayal of the company's original commitment to thoroughly assess AGI's long-term risks. This tension reflects broader debates about how governments should approach AI regulation.

AGI Timeline Predictions Fuel Urgency

The departing researchers' concerns are amplified by their belief that AGI development is accelerating rapidly. Before leaving OpenAI, Schulman predicted AGI would be achievable within two to three years. This timeline has created urgency around safety preparations that many experts believe are inadequate.

The safety researchers point to a "chilling effect" on publishing AGI risk research within OpenAI, suggesting that commercial and communications teams now have more influence over what research gets published. This shift toward corporate messaging over scientific transparency has contributed to the growing exodus of safety-focused staff.

Asia's role in this global AI safety discussion remains significant, with major partnerships like OpenAI's Stargate deal with South Korean tech giants highlighting the region's central position in AGI development infrastructure.

Why are so many AI safety researchers leaving major companies?

Researchers cite concerns about companies prioritising commercial goals over safety measures, inadequate preparation for AGI risks, and restrictions on publishing safety research that could help the broader AI community understand and mitigate potential dangers.

What is the superalignment team and why does its dissolution matter?

OpenAI's superalignment team focused on ensuring future AI systems remain aligned with human values and controllable. Its dissolution after key leadership departures signals reduced institutional commitment to long-term AI safety research.

Are these researchers abandoning AI development entirely?

No, most are joining competitors like Anthropic or starting new ventures focused on safe AI development. They remain committed to advancing AI technology but seek environments that prioritise safety alongside commercial goals.

How close is artificial general intelligence according to these experts?

Former OpenAI researchers suggest AGI could arrive within two to three years, making current safety preparations critically important. This timeline has intensified debates about whether companies are adequately prepared for AGI risks.

What role does regulation play in these departures?

Many departing researchers support stronger AI regulations like California's SB 1047 bill, while their former companies often oppose such measures. This regulatory divide reflects deeper disagreements about balancing innovation speed with safety precautions.

The AIinASIA View: The mass exodus of safety researchers from leading AI companies represents more than corporate reshuffling; it signals a fundamental crisis in how the industry approaches AGI development. When half of OpenAI's safety team abandons ship, we should listen carefully to their warnings. The concentration of AI development in companies that prioritise speed over safety creates systemic risks that extend far beyond individual organisations. Asia's growing role in AI infrastructure through partnerships like SoftBank's massive data centre investments means our region cannot remain a passive observer in this safety debate. We need proactive engagement with these concerns before AGI becomes reality.

The departure of so many safety experts from leading AI companies should serve as a wake-up call for the entire industry. Their concerns about inadequate AGI preparations and corporate priorities deserve serious consideration, especially given their insider knowledge of current AI capabilities. The fact that these researchers are continuing their work at safety-focused organisations rather than leaving the field entirely suggests there are paths forward for responsible AGI development.

As Asia continues expanding its role in global AI development through educational partnerships and infrastructure investments, the region has an opportunity to champion safety-first approaches to AGI development. The exodus of safety researchers offers valuable lessons about balancing commercial ambitions with responsible development practices.

What's your view on the balance between AI development speed and safety measures? Do you think these researcher departures reflect legitimate concerns or overcautious attitudes toward AGI risks? Drop your take in the comments below.

◇

YOUR TAKE

We cover the story. You tell us what it means on the ground.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


This article is part of the AI Policy Tracker learning path.


Latest Comments (3)

Kenji Suzuki @kenjis
AI
3 February 2026

"Half of OpenAI's AGI safety researchers" leaving while "fairly close" to AGI is… concerning. In robotics, we simulate risk constantly. To have so many safety specialists exit just as the real-world deployment of such complex systems becomes imminent suggests a critical disconnect in risk assessment or strategy.

Jasmine Koh @jasminek
AI
22 January 2026

It's interesting to hear Daniel Kokotajlo's perspective on the "chilling effect" on internal research visibility, especially regarding AGI risks. While individual departures can accumulate, the disbanding of the superalignment team after Leike and Sutskever's exit suggests a more systemic shift. This brings up concerns about how internal research governance structures align with stated safety goals, a topic I'm actually presenting on next month at the Responsible AI Forum.

Lakshmi Reddy @lakshmi.r
AI
19 November 2024

The idea of a "chilling effect" on internal research at OpenAI, as Kokotajlo mentioned, is concerning. It reminds me of how difficult it can be for novel safety considerations relevant to, say, Indic language models to gain traction if they don't align with dominant commercial or research narratives. It suggests a systemic issue beyond just one company, about how priorities are set.
