

    AI Safety Experts Flee OpenAI: Is AGI Around the Corner?

    Explore the recent exodus of AI safety researchers from OpenAI and its implications for the future of AI safety research.

    Anonymous | 3 September 2024 | 3 min read

    AI Snapshot

    The TL;DR: what matters, fast.

    OpenAI experienced an exodus of safety researchers concerned about the company's preparedness for artificial general intelligence (AGI).

    Prominent researchers, including Ilya Sutskever and Jan Leike, departed due to a lack of trust in OpenAI's commitment to safe and responsible AI development.

    These former OpenAI researchers continue to pursue AI safety at other organizations or through new ventures, indicating ongoing concerns within the field.

    Who should pay attention: AI safety researchers | Policymakers | AI company executives

    What changes next: Debate is likely to intensify regarding AI safety protocols and AGI development timelines.

    About half of OpenAI's AGI safety researchers have left the company.

    Former researcher Daniel Kokotajlo believes OpenAI is close to AGI but unprepared for its risks.

    Departing experts remain committed to AI, moving to competitors or starting new ventures.

    Artificial Intelligence (AI) is growing fast in Asia and around the world. One big name in AI is OpenAI. But recently, something surprising happened. Many of the company's safety researchers left. Why did they leave? And what does this mean for the future of AI?

    A Surprising Exodus

    About half of OpenAI's AGI/ASI safety researchers have left the company. Daniel Kokotajlo, a former OpenAI safety researcher, talked to Fortune magazine about this. He said that these researchers left because OpenAI is "fairly close" to developing artificial general intelligence (AGI) but isn't ready to handle the risks that come with it. You can read more about the definitions of Artificial General Intelligence in our detailed article.

    Kokotajlo didn't give specific reasons for all the resignations. But he believes they align with his own views. He said there's a "chilling effect" on publishing research about AGI risks within the company. He also noted that OpenAI's communications and lobbying wings increasingly shape what is considered appropriate to publish.

    Key Departures:

    Jan Hendrik Kirchner
    Collin Burns
    Jeffrey Wu
    Jonathan Uesato
    Steven Bills
    Yuri Burda
    Todor Markov
    John Schulman (OpenAI co-founder)
    Ilya Sutskever (Chief scientist)
    Jan Leike (Superalignment team leader)

    Why Did They Leave?

    Kokotajlo said these departures weren't a "coordinated thing" but rather people "individually giving up." The resignations of Ilya Sutskever and Jan Leike were particularly significant. They jointly led the company's "superalignment" team, which focused on the safety of future AI systems. OpenAI disbanded the team after their departure. This reflects a growing concern explored further in Why ProSocial AI Is The New ESG.

    "We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems." - Letter to Governor Newsom

    "We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems." - Letter to Governor Newsom

    Where Did They Go?

    The departing researchers didn't leave AI altogether. They just left OpenAI. Leike and Schulman moved to safety research roles at OpenAI competitor Anthropic. Before leaving OpenAI, Schulman said he believed AGI would be possible in two to three years. Sutskever founded his own startup to develop safe superintelligent AI.

    This suggests that these experts still see potential in AI technology. They just no longer view OpenAI as the right place to pursue it.

    OpenAI and Regulation

    Kokotajlo expressed disappointment, but not surprise, that OpenAI opposed California's SB 1047 bill, which aims to regulate the risks of advanced AI systems. He co-signed a letter to Governor Newsom criticizing OpenAI's stance. The letter called it a betrayal of the company's original plan to thoroughly assess AGI's long-term risks in order to inform regulations and laws. This mirrors discussions seen in Taiwan's AI Law Is Quietly Redefining What "Responsible Innovation" Means. For more on global AI governance, consider reviewing the OECD AI Principles.

    The Future of AI Safety

    So, what does this all mean for the future of AI safety? It's clear that there are differing opinions on how to manage AGI risks. But it's also clear that many experts are committed to ensuring AI is developed safely and responsibly. AI is a fast-changing field. It's important to stay informed about the latest developments.

    Comment and Share:

    What do you think about the future of AI safety? Do you believe AGI is around the corner? Share your thoughts in the comments below. And don't forget to Subscribe to our newsletter for updates on AI and AGI developments.

    This is a developing story

    We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

    Latest Comments (3)

    Elaine Ng (@elaine_n_ai) | 10 November 2025

    Funny how we're still talking AGI now, after so many safety folks bailed. Makes one wonder about the actual progress.

    Meera Reddy (@meera_r_ai) | 15 October 2024

    This is fascinating, I'm just catching up on all this OpenAI news. With so many safety experts leaving, especially after all the discussions about AGI, does it make you wonder if they felt their concerns weren't being adequately addressed internally, or if they genuinely believe the dangers are more imminent than the company is letting on? It’s a bit worrying, no?

    Arjun Patel (@arjun_p_dev) | 8 October 2024

    This article, quite the eye-opener, really makes one wonder, doesn't it? As an Indian reader following this space casually, the brain drain at OpenAI is a proper concern. It absolutely tracks with the anxieties about AGI development outpacing safety protocols. Definitely bookmarking this to ponder further.
