AI in ASIA
Google DeepMind CEO Demis Hassabis

Google AI chief says scaling isn't enough for true breakthroughs in AI

Google's AI chief challenges Silicon Valley's obsession with scaling, arguing that bigger models won't deliver AGI breakthroughs.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

DeepMind CEO argues scaling large language models yields diminishing returns for AGI

Training costs for frontier models now exceed $100 million, yet current models match human-level performance on only 23% of cognitive tasks

Google pivots toward agent-based AI systems that can plan and take autonomous actions


Google's AI Chief Challenges the Scale-First Mentality

DeepMind CEO Demis Hassabis is pushing back against Silicon Valley's obsession with bigger models and more compute power. In recent discussions, he's argued that whilst scaling has driven remarkable progress in AI, it won't deliver the transformative breakthroughs needed for artificial general intelligence.

The position marks a subtle but significant departure from the industry's current trajectory. As competitors race to build ever-larger language models, Google's approach suggests the tech giant sees fundamental limits to the scale-first strategy.

Why Scaling Has Hit a Wall

The past two years have witnessed extraordinary developments in large language models. ChatGPT demonstrated what's possible when you combine massive datasets with unprecedented computational resources. Google's own Gemini followed suit, showcasing similar capabilities across text, code, and multimodal tasks.

Yet Hassabis argues this approach has natural boundaries. Simply adding more parameters or training data yields diminishing returns without addressing core architectural limitations. Google's recent work on mathematical reasoning, covered in "AI Takes on Math: Google's Breakthroughs in Reasoning", represents one attempt to move beyond pure scale towards more sophisticated problem-solving approaches.

Current models remain fundamentally reactive systems. They respond to prompts but can't actively plan, learn from experience, or adapt their behaviour based on long-term goals.

By The Numbers

  • GPT-4 reportedly contains approximately 1.8 trillion parameters, compared to GPT-3's 175 billion
  • Training costs for frontier models now exceed $100 million per model
  • Google's AI research division employs over 2,000 researchers globally
  • Current language models achieve human-level performance on only 23% of cognitive tasks
  • Energy consumption for training large models has increased 300,000x since 2012
"We've seen tremendous progress through scaling, but I think we're approaching some fundamental limits. The next breakthroughs will require new architectures and approaches, not just bigger models."
Demis Hassabis, CEO, DeepMind

The Agent-Based Alternative

Google's research increasingly focuses on agent-based AI systems that can take autonomous actions. Unlike today's chatbots, these systems would actively pursue goals, make plans, and learn from their interactions with the environment.

This shift aligns with broader industry trends. The roadmap outlined in "Google: 5 AI Agents to Transform Work by 2026" demonstrates how the company envisions AI moving beyond passive question-answering towards active problem-solving.

Agent-based systems present unique challenges. They require robust safety mechanisms, reliable reasoning capabilities, and the ability to operate in complex, unpredictable environments. Current scaling approaches don't adequately address these requirements.

The following research priorities highlight Google's agent-focused strategy:

  • Reinforcement learning systems that improve through environmental feedback
  • Multi-agent architectures where AI systems collaborate on complex tasks
  • Safety frameworks for testing autonomous AI in controlled environments
  • Long-term memory systems that enable persistent learning and adaptation
  • Integration layers connecting AI agents with real-world tools and systems
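To make the distinction between reactive chatbots and goal-directed agents concrete, here is a minimal sketch of the observe-plan-act-learn loop these priorities describe. Everything below is illustrative and hypothetical, not Google code: the `Agent` class, its toy integer "environment", and the reward signal are all invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy goal-directed agent: observe -> plan -> act -> learn."""
    goal: int
    memory: list = field(default_factory=list)  # persistent experience record

    def plan(self, observation: int) -> int:
        # Choose an action that moves the observed state towards the goal.
        return 1 if observation < self.goal else -1

    def learn(self, observation: int, action: int, reward: float) -> None:
        # Store the experience so future behaviour can adapt.
        self.memory.append((observation, action, reward))

def run_episode(agent: Agent, start: int, max_steps: int = 20) -> int:
    """Run the agent in a one-dimensional integer environment."""
    state = start
    for _ in range(max_steps):
        if state == agent.goal:
            break
        action = agent.plan(state)            # planning step
        state += action                        # act on the environment
        reward = -abs(agent.goal - state)      # environmental feedback
        agent.learn(state, action, reward)     # persistent adaptation
    return state

agent = Agent(goal=5)
final = run_episode(agent, start=0)
print(final)               # reaches the goal state: 5
print(len(agent.memory))   # experiences recorded along the way: 5
```

The contrast with a pure language model is the loop itself: the agent keeps acting and accumulating feedback across steps rather than producing a single response to a single prompt.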

Safety Concerns Drive Research Direction

Hassabis emphasises the critical importance of developing safety measures alongside capability improvements. As AI systems become more autonomous, the potential for unintended consequences increases dramatically.

Google advocates for "hardened simulation sandboxes" where advanced AI agents can be thoroughly tested before real-world deployment. This approach reflects lessons learned from previous AI safety incidents and regulatory pressure from governments worldwide.
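One way to picture such a sandbox, purely as an illustration (nothing here reflects Google's actual tooling; the random-walk environment, the safety bound, and both policies are invented for this sketch): run candidate agent policies against a simulated environment with a hard safety limit, and only pass a policy through the gate if it stays within bounds across many trials.

```python
import random

def sandbox_score(policy, trials=100, step_limit=50, seed=0):
    """Score a policy in a sandboxed random-walk environment.

    A policy maps state -> action (+1 or -1). A trial is 'unsafe' if the
    state ever leaves the allowed range. Returns the fraction of safe trials.
    """
    rng = random.Random(seed)
    safe = 0
    for _ in range(trials):
        state = rng.randint(-3, 3)   # randomised starting condition
        ok = True
        for _ in range(step_limit):
            state += policy(state)
            if abs(state) > 10:      # hard safety bound in the sandbox
                ok = False
                break
        safe += ok
    return safe / trials

# A bounded policy that steers back towards zero never breaches the bound.
bounded = lambda s: -1 if s > 0 else 1
# A runaway policy that always increments eventually breaches it.
runaway = lambda s: 1

print(sandbox_score(bounded))   # 1.0 — passes the sandbox gate
print(sandbox_score(runaway))   # 0.0 — rejected before deployment
```

Real sandboxes would of course test far richer behaviours, but the gating principle is the same: no deployment until simulated trials stay within the safety envelope.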

"We can't just build more powerful systems and hope they remain safe. Safety has to be designed in from the ground up, especially as we move towards more autonomous agents."
Demis Hassabis, CEO, DeepMind

The safety-first approach influences research priorities and product development timelines. Google's measured rollout of AI features contrasts sharply with more aggressive competitors, reflecting this philosophical commitment.

How the three approaches compare:

  • Scale-First: current focus on larger models and more data; key limitations are diminishing returns and energy costs; future potential is limited to incremental improvements.
  • Agent-Based: current focus on autonomous reasoning and planning; key limitations are safety concerns and complexity; future potential is transformative capabilities.
  • Hybrid: current focus on combining scale with new architectures; key limitations are technical complexity and coordination; future potential is balanced progress and safety.

Asia-Pacific Market Implications

Google's strategic shift has significant implications for Asian markets. The region's tech giants have largely followed the scaling playbook, investing heavily in larger models and computational infrastructure.

However, as "Google Boss: AI Boom Has 'Irrationality'" reports, a more measured approach may prove more sustainable in the long term. Companies focusing on practical applications rather than pure scale could find competitive advantages.

Regional AI development increasingly emphasises local language capabilities and cultural adaptation. Google's focus on fundamental research rather than pure scaling could enable more nuanced, locally relevant AI systems.

The shift towards agent-based AI also creates opportunities for Asian companies specialising in robotics, manufacturing automation, and smart city technologies. These applications require the kind of autonomous, goal-directed behaviour that pure language models can't provide.

What does this mean for AI development timelines?

Hassabis suggests achieving artificial general intelligence will require "several more innovations" beyond scaling. This implies AGI development may take longer than current industry projections suggest, but the resulting systems may be more robust and capable.

How does Google's approach differ from competitors?

While companies like OpenAI and Anthropic continue scaling existing architectures, Google increasingly invests in fundamental research exploring new approaches like agent-based systems, multi-modal reasoning, and safety frameworks.

What are the practical implications for businesses?

Companies should prepare for AI systems that can take autonomous actions rather than just respond to queries. This requires rethinking workflow design, safety protocols, and human-AI collaboration models across industries.

Why is safety becoming such a priority?

As AI systems become more autonomous and capable, the potential for unintended consequences grows exponentially. Google's emphasis on safety reflects both technical necessity and regulatory pressure from governments worldwide.

Will this slow down AI progress?

Hassabis argues the opposite: focusing on fundamental research rather than pure scaling will ultimately accelerate meaningful progress by addressing core limitations of current approaches rather than simply making them bigger.

The AIinASIA View: Hassabis is spot on. The industry's fixation on parameter counts and training costs has created impressive demos but limited practical value. We're seeing diminishing returns from scaling, whilst energy consumption and development costs spiral upward. Google's pivot towards agent-based systems and fundamental research represents a more sustainable path forward. However, execution remains critical. The company must deliver concrete results to validate this approach, particularly as competitors continue pushing scale-first strategies that generate impressive headlines. The real test will be whether Google can translate this research focus into products that meaningfully outperform scale-optimised alternatives.

The implications of this strategic shift extend far beyond Google's research labs. As the industry grapples with the limitations of current approaches, companies that successfully navigate beyond pure scaling may establish lasting competitive advantages.

What aspects of this shift towards agent-based AI do you find most compelling or concerning? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (2)

Rizky Pratama (@rizky.p) · 14 April 2024

totally agree with Hassabis on agent-based AI. for Tokopedia, we've seen how rule-based agents can improve customer service flows. imagine if they could actively learn and adapt to user behavior in real-time, especially with our unique market dynamics. way more useful than just bigger LLMs.

Maggie Chan (@maggiec) · 17 March 2024

Demis is right, scaling LLMs only gets you so far. We've seen it with our compliance tech - throwing more data at it helps, sure, but the real breakthroughs on understanding nuanced HK legal speak, or bridging the gap with mainland regs, that comes from totally different, almost agent-based approaches we're still figuring out. It's not just about bigger models.
