The AI Race to Destruction: Insiders Warn of Catastrophic Harm

AI experts, including a former OpenAI employee, warn that AI could destroy or catastrophically harm humanity. Their concerns centre on the pursuit of AGI and a disregard for safety measures.

TL;DR

  • Former OpenAI employee Daniel Kokotajlo warns of a 70% chance that AI will destroy or catastrophically harm humanity.
  • Kokotajlo and other AI experts assert their “right to warn” the public about the risks posed by AI.
  • OpenAI is accused of ignoring safety concerns in the pursuit of artificial general intelligence (AGI).

AI’s Doomsday Prediction: 70% Chance of Destruction

Daniel Kokotajlo, a former governance researcher at OpenAI, has issued a chilling warning. He believes that the odds of artificial intelligence (AI) either destroying or causing catastrophic harm to humankind are greater than a coin flip, specifically around 70%.

The Race to AGI: Ignoring the Risks

Kokotajlo’s concerns stem from OpenAI’s pursuit of artificial general intelligence (AGI). AGI is a type of AI that can understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond a human being.

Kokotajlo accuses OpenAI of ignoring the monumental risks posed by AGI. He claims that the company is “recklessly racing to be the first” to develop AGI, captivated by its possibilities but disregarding the potential dangers.

Asserting the ‘Right to Warn’

Kokotajlo is not alone in his concerns. Former and current employees at Google DeepMind and Anthropic, along with Geoffrey Hinton, the so-called “Godfather of AI,” have joined him in asserting their “right to warn” the public about the risks posed by AI.

A Call for Safety: Urging OpenAI’s CEO

Kokotajlo’s concerns about AI’s potential harm led him to personally urge OpenAI’s CEO, Sam Altman, to “pivot to safety.” He wanted the company to spend more time implementing guardrails to rein in the technology rather than continuing to make it smarter.

However, Kokotajlo felt that Altman’s agreement was merely lip service. Fed up, Kokotajlo quit the firm in April, stating that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI.

OpenAI’s Response: Engaging in Rigorous Debate

In response to these allegations, OpenAI stated that it is proud of its track record of providing the most capable and safest AI systems. The company agreed that rigorous debate is crucial and said it will continue to engage with governments, civil society, and other communities around the world. It also highlighted its avenues for employees to express concerns, including an anonymous integrity hotline and a Safety and Security Committee led by members of its board and safety leaders from the company.

The Future of AI: A Call to Action

The warnings from Kokotajlo and other AI experts underscore the need for a more cautious and responsible approach to AI development. As we race towards AGI, it is crucial to prioritise safety and address the potential risks posed by this powerful technology.

Comment and Share:

What are your thoughts on the warnings from AI experts about the potential destruction or catastrophic harm caused by AI? Do you think we are doing enough to prioritise safety in AI development? We’d love to hear your thoughts. Subscribe for updates on AI and AGI developments.

