Geoffrey Hinton warns there’s up to a 20% chance AI could take control from humans. He believes big tech is ignoring safety while racing ahead for profit. Hinton says we’re playing with a “tiger cub” that could one day kill us — and most people still don’t get it.
Geoffrey Hinton's AI warning
Geoffrey Hinton, often referred to as the "Godfather of AI," has repeatedly voiced concerns about the potential existential risks posed by advanced artificial intelligence. His warnings come amid a global surge in AI development and investment, with many companies pushing the boundaries of what AI can achieve. For more insight into the ethical considerations surrounding AI, you may be interested in discussions about ProSocial AI.
The debate around AI safety versus rapid advancement is complex. While some argue that strict regulations are needed to prevent unforeseen dangers, others believe that stifling innovation could put countries at a disadvantage in the global AI race. This tension is evident in various regions, including Southeast Asia, where there is a perceived trust deficit around AI. Hinton's concerns are echoed by other experts who worry about the long-term implications of increasingly autonomous systems, especially as AI capabilities expand into areas such as autonomous agents and their impact on jobs.
His recent statements highlight a critical dilemma: how do we balance the immense potential benefits of AI, such as advances in healthcare, climate science, and economic growth, against the serious risks it might pose? Hinton's "tiger cub" analogy serves as a stark reminder that the tools we are creating could eventually become uncontrollable. This isn't just about job displacement or privacy concerns, but about the fundamental question of who, or what, will ultimately hold power. For a deeper understanding of the broader implications, consider exploring research on AI safety and governance from institutions like the Future of Life Institute.
Over to YOU!
Are we training the tools that will one day outgrow us — or are we just too dazzled by the profits to care?

Latest Comments (3)
i totally get hinton's point about the "tiger cub" and how serious this all is. in vietnam, we're building some really cool stuff with AI for our language, you know, vietnamese has all these tones and complex grammar so it's a huge challenge. but even in our small startup, we're talking about safety. we don't want to build something amazing then realize it's out of control. it's not just about profit for us, it's about building a good tool for our community. the big tech companies should really listen to him, especially when they're pushing things out globally without thinking of the smaller languages and cultural nuances, let alone the bigger "takeover" risk he talks about.
Hinton's 20% number for AI taking control, that's a tough one to operationalize. from a team lead perspective, how do you even begin to factor that into project planning or risk assessments for new AI integrations? we're constantly trying to balance progress with managing the known unknowns, but a one-in-five chance of "taking control" feels like it flips the whole risk model on its head.
Hinton's "tiger cub" analogy is a bit dramatic. Our research at Tsinghua, particularly with models like Qwen and DeepSeek, focuses on control and interpretability. The challenge is not some autonomous beast, but engineering robust alignment. The real risk is flawed objectives, not sentience.