
    Google AI chief says scaling isn't enough for true breakthroughs in AI

    Anonymous
3 min read · 3 March 2024
    Google DeepMind CEO Demis Hassabis

    AI Snapshot

    The TL;DR: what matters, fast.

DeepMind CEO Demis Hassabis believes that scaling alone (using more data and computing power) will not achieve true AI breakthroughs such as Artificial General Intelligence (AGI).

    Hassabis emphasizes the need for fundamental research and new approaches, such as agent-based AI, which involves systems that can learn and act in the real world.

    He stresses the importance of robust safety measures and using "hardened simulation sandboxes" to test advanced AI agents before deployment.

    Who should pay attention: AI researchers | Google DeepMind | Machine learning enthusiasts

    What changes next: Debate is likely to intensify regarding the sufficiency of scaling for AI breakthroughs.

DeepMind CEO Demis Hassabis discusses the future of Artificial Intelligence (AI), arguing that while advances in computing power and data have been crucial, they are not the sole key to unlocking true breakthroughs such as Artificial General Intelligence (AGI). Instead, Google AI is looking beyond scaling.

"Scale Only Gets You So Far"

    The Rise of Large Language Models and the Race for Scale

    The past year has witnessed significant developments in the field of AI, particularly with the emergence of powerful Large Language Models (LLMs) like ChatGPT and Google's own Gemini. These models have captured the public imagination with their ability to generate human-quality text, translate languages, and answer questions in an informative way. However, Hassabis cautions against solely relying on "scaling" - throwing more data and computing power at the problem - as the solution to achieving AGI.

Beyond Scaling: The Need for Fundamental Research and New Approaches


While acknowledging the importance of scale, Hassabis emphasises the crucial role of fundamental research in propelling AI forward, pointing to Google's long history of pioneering machine learning techniques. He argues that achieving AGI will likely require "several more innovations" beyond simply scaling existing techniques. For more on how AI is transforming various sectors, you might be interested in how AI with Empathy for Humans is being developed.

    Exploring New Frontiers: Agent-based AI and Safe Development

One potential avenue for future breakthroughs lies in exploring new approaches like agent-based AI. This concept involves developing AI systems that can actively learn, plan, and take actions in the real world, going beyond the limitations of current "passive Q&A systems." While acknowledging the potential benefits and increased utility of such systems, Hassabis also stresses the importance of robust safety measures. He advocates for the use of "hardened simulation sandboxes" to test these advanced AI agents before deploying them in the real world. The ongoing debate over the many definitions of Artificial General Intelligence further underscores the complexity of this field.
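To make the distinction between a "passive Q&A system" and an agent concrete, the following is a minimal sketch only (all names, such as `ToySandbox` and `greedy_policy`, are invented for illustration and are not from DeepMind): an agent repeatedly observes a simulated environment, chooses an action, and acts, rather than producing a single answer to a single query.

```python
class ToySandbox:
    """A tiny 1-D simulated world: the agent starts at 0 and tries to reach a goal."""

    def __init__(self, goal=5):
        self.position = 0
        self.goal = goal

    def observe(self):
        # The agent can only see its current position.
        return self.position

    def step(self, action):
        # action is +1 or -1; returns True once the goal is reached.
        self.position += action
        return self.position == self.goal


def greedy_policy(position, goal):
    """Move toward the goal -- a stand-in for a learned policy."""
    return 1 if position < goal else -1


def run_agent(env, policy, max_steps=20):
    # The observe -> decide -> act loop that distinguishes an agent
    # from a one-shot question-answering system.
    for t in range(max_steps):
        obs = env.observe()
        done = env.step(policy(obs, env.goal))
        if done:
            return t + 1  # number of steps taken to reach the goal
    return None  # failed to reach the goal within the step budget


print(run_agent(ToySandbox(goal=5), greedy_policy))  # prints 5
```

Testing the agent inside a sandbox like this, before it ever touches the real world, is the intuition behind the "hardened simulation sandboxes" mentioned above, though production systems would of course be vastly more complex.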

    The Future of AI: A Collaboration Between Research and Development

In conclusion, Demis Hassabis's interview offers valuable insights into the current state and future trajectory of AI development. While acknowledging the significance of scaling and computational power, he calls for a balanced approach that prioritises fundamental research, explores new avenues like agent-based AI, and ensures the safe development and deployment of increasingly powerful AI systems. This collaboration between research and development will be crucial in shaping the future of AI and its responsible advancement for the benefit of society. For a broader view of AI's impact, consider the discussion in AI's Secret Revolution: Trends You Can't Miss.



    Latest Comments (3)

Dimas Wijaya (@dimas_w_dev) · 27 November 2025

    Ah, this is interesting. I'm just revisiting this whole AI conversation, and the chief's point about scaling not being enough really caught my eye. What do they reckon is the *next* big hurdle or ingredient needed for actual breakthroughs then? Is it more about how we structure the data or perhaps something completely different?

Sanjay Pillai (@sanjay_p) · 28 April 2024

Interesting perspective from the Google AI chief. While I agree scaling alone isn't the magic bullet, I wonder if we're perhaps overthinking it. Sometimes sheer computational power, even if it feels like just 'more of the same', can reveal unexpected patterns or emergent behaviours that lead to a breakthrough we couldn't have predicted. It's like throwing a bigger net: you might just catch something truly unique you weren't even looking for. Are we sometimes too quick to dismiss the power of raw grunt? It's a chicken-and-egg situation, perhaps.

Shota Takahashi (@shota_t) · 10 March 2024

    Interesting perspective. I just saw this headline and thought, "Finally, someone at the top is saying it!" I keep hearing about bigger models and more data, but it feels like we’re hitting a ceiling on what that alone can achieve for *true* artificial intelligence. I’m curious, though, when he talks about “new paradigms,” what exactly does that *look* like in practice for a company like Google? Are we talking about something completely different from neural networks, or more like a fundamental re-think of how we train them? It’s a proper head-scratcher. I’ll definitely be following this development.
