Geoffrey Hinton’s AI Wake-Up Call — Are We Raising a Killer Cub?

The “Godfather of AI” Geoffrey Hinton warns that humans may lose control of artificial intelligence — and slams tech giants for downplaying the risks.

TL;DR — What You Need to Know

  • Geoffrey Hinton warns there’s up to a 20% chance AI could take control from humans.
  • He believes big tech is ignoring safety while racing ahead for profit.
  • Hinton says we’re playing with a “tiger cub” that could one day kill us — and most people still don’t get it.

Geoffrey Hinton’s AI warning

Geoffrey Hinton doesn’t do clickbait. But when the “Godfather of AI” says we might lose control of artificial intelligence, it’s worth sitting up and listening.

Hinton, who helped lay the groundwork for modern neural networks back in the 1980s, has always been something of a reluctant prophet. He’s no AI doomer by default — in fact, he sees huge promise in using AI to improve medicine, education, and even tackle climate change. But recently, he’s been sounding more like a man trying to shout over the noise of unchecked hype.

And he’s not mincing his words:

“We are like somebody who has this really cute tiger cub,” he told CBS. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”
— Geoffrey Hinton, the “Godfather of AI”

That tiger cub? It’s AI.
And Hinton estimates there’s a 10–20% chance that AI systems could eventually wrest control from human hands. Think about that. If your doctor said you had a 20% chance of a fatal allergic reaction, would you still eat the peanuts?

What really stings is Hinton’s frustration with the companies leading the AI charge — including Google, where he used to work. He’s accused big players of lobbying for less regulation, not more, all while paying lip service to the idea of safety.

“There’s hardly any regulation as it is,” he says. “But they want less.”
— Geoffrey Hinton, the “Godfather of AI”

It gets worse. Hinton argues that companies should be putting about a third of their computing power into safety research. Right now, it’s barely a fraction of that. When CBS asked OpenAI, Google, and X.AI how much compute they actually allocate to safety, none of them answered the question. Big on ambition, light on transparency.

This raises real questions about who gets to steer the AI ship — and whether anyone is even holding the wheel.

Hinton isn’t alone in his concerns. Tech leaders from Sundar Pichai to Elon Musk have all raised red flags about AI’s trajectory. But here’s the uncomfortable truth: while the industry is good at talking safety, it’s not so great at building it into its business model.

So the man who once helped AI learn to finish your sentence is now trying to finish his own — warning the world before it’s too late.

Over to YOU!

Are we training the tools that will one day outgrow us — or are we just too dazzled by the profits to care?
