
AI Safety Experts Flee OpenAI: Is AGI Around the Corner?

Explore the recent exodus of AI safety researchers from OpenAI and its implications for the future of AI safety research.


TL;DR:

  • About half of OpenAI’s AGI safety researchers have left the company.
  • Former researcher Daniel Kokotajlo believes OpenAI is close to AGI but unprepared for its risks.
  • Departing experts remain committed to AI, moving to competitors or starting new ventures.

Artificial Intelligence (AI) is growing fast in Asia and around the world. One of the biggest names in AI is OpenAI. But recently, something surprising happened: many of the company’s safety researchers left. Why did they leave? And what does it mean for the future of AI?

A Surprising Exodus

About half of OpenAI’s AGI/ASI safety researchers have left the company. Daniel Kokotajlo, a former OpenAI safety researcher, talked to Fortune magazine about this. He said that these researchers left because OpenAI is “fairly close” to developing artificial general intelligence (AGI) but isn’t ready to handle the risks that come with it.

Kokotajlo didn’t give specific reasons for every resignation, but he believes they align with his own views. He said there is a “chilling effect” within the company on publishing research about AGI risks, and that OpenAI’s communications and lobbying wings have a growing influence over what is considered appropriate to publish.

Key Departures:

  • Jan Hendrik Kirchner
  • Collin Burns
  • Jeffrey Wu
  • Jonathan Uesato
  • Steven Bills
  • Yuri Burda
  • Todor Markov
  • John Schulman (OpenAI co-founder)
  • Ilya Sutskever (Chief scientist)
  • Jan Leike (Superalignment team leader)

Why Did They Leave?

Kokotajlo said these departures weren’t a “coordinated thing” but rather people “individually giving up.” The resignations of Ilya Sutskever and Jan Leike were particularly significant. They jointly led the company’s “superalignment” team focused on future AI system safety. OpenAI disbanded this team after their departure.

“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.” – Letter to Governor Newsom

Where Did They Go?

The departing researchers didn’t leave AI altogether. They just left OpenAI. Leike and Schulman moved to safety research roles at OpenAI competitor Anthropic. Before leaving OpenAI, Schulman said he believed AGI would be possible in two to three years. Sutskever founded his own startup to develop safe superintelligent AI.

This suggests that these experts still see potential in AI technology. They just no longer view OpenAI as the right place to pursue it.


OpenAI and Regulation

Kokotajlo expressed disappointment, but not surprise, that OpenAI opposed California’s SB 1047 bill, which aims to regulate the risks of advanced AI systems. He co-signed a letter to Governor Newsom criticizing OpenAI’s stance, calling it a betrayal of the company’s original plan to thoroughly assess the long-term risks of AGI in order to inform regulations and laws.

The Future of AI Safety

So, what does this all mean for the future of AI safety? There are clearly differing opinions on how to manage AGI risks, but it is also clear that many experts remain committed to ensuring AI is developed safely and responsibly. AI is a fast-moving field, and it’s important to stay informed about the latest developments.

Comment and Share:

What do you think about the future of AI safety? Do you believe AGI is around the corner? Share your thoughts in the comments below. And don’t forget to subscribe for updates on AI and AGI developments.

