
AI in ASIA

Google and Meta in multi-billion-pound talks

Google courts Meta in multi-billion deal to supply custom TPUs, challenging Nvidia's AI chip dominance with 2027 deployment timeline.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

Google negotiating multi-billion TPU supply deal with Meta starting 2027 deployment

Deal represents Google's biggest challenge to Nvidia's AI infrastructure monopoly

Meta plans £88-103 billion AI investment in 2026, seeking chip supplier diversification

Google's Bold Gambit to Break Nvidia's AI Infrastructure Monopoly

Alphabet is reportedly in advanced negotiations with Meta over a multi-billion-pound deal that could reshape the AI hardware landscape. The arrangement would see Google supply its custom Tensor Processing Units (TPUs) to power Meta's massive data centres, marking Google's most aggressive challenge yet to Nvidia's stranglehold on AI infrastructure.

The proposed timeline suggests Meta would begin deploying Google's TPUs in its facilities from 2027, with rental capacity through Google Cloud potentially starting as early as next year. This represents a dramatic shift from Google's historical practice of keeping TPUs exclusively within its own cloud platform.

Market reaction was immediate. Alphabet shares surged following the reports, whilst Nvidia stock dipped as investors weighed the implications of fresh competition in the AI chip sector.


TPUs Enter the Enterprise Arena

Google's ambitions extend far beyond Meta. The company is actively courting high-frequency trading firms and financial institutions, positioning TPUs as superior alternatives for on-premises deployment where security and compliance requirements are paramount.

Meta currently runs its AI infrastructure, which serves more than three billion daily users across its platforms, primarily on Nvidia GPUs. Google Cloud executives believe capturing major clients like Meta could help them secure up to 10% of Nvidia's annual revenue, potentially worth billions in new business.

The push comes as AI computing demand vastly outstrips supply. Companies worldwide are scrambling for processing power to train and run increasingly sophisticated models, creating unprecedented strain on chip availability.

By The Numbers

  • Meta plans to invest between £88 billion and £103 billion in AI during 2026, nearly doubling prior spending
  • Google's seventh-generation Ironwood TPU delivers four times the performance of its predecessor
  • Combined AI spending by Google, Microsoft, Meta, and Amazon could reach £498 billion in 2026
  • Meta signed a separate £46 billion processor deal with AMD to support AI data centres
  • Anthropic committed to accessing up to one million Google TPUs in a deal worth tens of billions
"2026 is a pivotal year for AI, with Meta working on multiple products rather than a single launch." Mark Zuckerberg, CEO, Meta

The timing aligns with Meta's broader infrastructure expansion. The social media giant recently secured a £46 billion deal with AMD for processors, demonstrating its commitment to diversifying chip suppliers beyond Nvidia's ecosystem.

Google's Decade-Long Hardware Investment Pays Off

Google has quietly invested in custom AI silicon for nearly a decade, initially developing TPUs exclusively for internal use. The Ironwood generation represents the culmination of this effort, offering 30 times the energy efficiency of Google's first Cloud TPU from 2018.

This efficiency advantage becomes crucial as data centre operators grapple with escalating power consumption from AI workloads. The £41 billion AI chip market in Asia particularly values energy-efficient solutions as governments impose stricter environmental regulations.

Strategic partnerships with Broadcom for TPU design and manufacturing have proven essential. Broadcom's stock jumped 10% following positive coverage of Google's AI hardware momentum, reflecting investor confidence in the collaboration.

TPU Generation            Performance Improvement   Energy Efficiency Gain   Key Applications
First Generation (2018)   Baseline                  Baseline                 Basic ML workloads
Fifth Generation          10x faster                15x more efficient       Large language models
Ironwood (7th Gen)        4x over predecessor       30x over first gen       Advanced AI training
"With this deal, I think AMD sort of comes across as the more desperate partner. They are at a $10 million to $12 million run rate and this could double every year in terms of adding one customer at this scale." Bloomberg Tech Analyst

The broader competitive landscape shows Meta actively seeking Asian chip partnerships to reduce dependence on any single supplier. This diversification strategy reflects growing concerns about supply chain resilience in AI infrastructure.

Market Implications and Strategic Positioning

Google's TPU offensive represents more than hardware sales. It's a calculated move to establish Google Cloud as a serious alternative to Amazon Web Services and Microsoft Azure in the AI infrastructure race.

Key advantages Google is leveraging include:

  • Superior energy efficiency reducing operational costs for large-scale deployments
  • Customised optimisation for Google's AI software stack and models
  • Competitive pricing compared to Nvidia's premium GPU offerings
  • Reduced dependency on external chip suppliers for strategic customers
  • Enhanced security through dedicated hardware for sensitive workloads

The proposed Meta deal validates Google's long-term hardware strategy whilst providing crucial revenue diversification. Previous wins include Anthropic's commitment to accessing up to one million TPUs, citing "price-performance and efficiency" as decisive factors.

Industry observers note the timing coincides with Google's broader AI strategy reboot following competitive pressure from OpenAI and other rivals. TPU commercialisation offers Google a unique differentiator in an increasingly crowded AI market.

Will Google's TPUs actually challenge Nvidia's dominance?

Whilst promising, Google faces significant hurdles. Nvidia's CUDA ecosystem enjoys deep developer adoption and extensive software support. Google must prove TPUs can match this breadth whilst delivering superior economics.

What makes TPUs different from traditional GPUs?

TPUs are purpose-built for AI workloads, offering better energy efficiency and performance for specific machine learning tasks. However, they're less versatile than GPUs for general-purpose computing applications.
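To make the distinction concrete: TPUs are organised around large matrix-multiply units, so the workload they accelerate best is the dense layer at the heart of neural networks. The NumPy sketch below shows that operation itself (matmul plus activation); it is a generic illustration of the computation, not any vendor's API, and the shapes are arbitrary example values.

```python
import numpy as np

# The canonical AI workload a TPU accelerates: y = relu(x @ w + b).
# A TPU's systolic matrix unit streams this multiply-accumulate pattern
# in hardware; a GPU runs it on more general-purpose cores.
rng = np.random.default_rng(0)
x = rng.standard_normal((128, 256))   # a batch of 128 input activations
w = rng.standard_normal((256, 512))   # layer weights
b = np.zeros(512)                     # bias

y = np.maximum(x @ w + b, 0.0)        # matrix multiply followed by ReLU
print(y.shape)                        # (128, 512)
```

Almost all of the arithmetic in training and serving large models reduces to stacks of this one operation, which is why a chip specialised for it can beat a general-purpose GPU on energy per operation while being less useful for anything else.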

How significant is the potential Meta deal financially?

Industry analysts suggest the deal could be worth billions annually, potentially representing 5-10% of Nvidia's current AI chip revenue. This would substantially boost Google Cloud's hardware services division.

When might we see TPUs in Meta's data centres?

Reports suggest 2027 for on-premises deployment, with Google Cloud TPU rental capacity potentially available as early as 2026. The timeline depends on finalising commercial terms and technical integration.

Could other tech giants follow Meta's lead?

Absolutely. Google is reportedly pitching TPUs to financial firms and trading companies. Success with Meta could create momentum for broader enterprise adoption, particularly where energy efficiency matters.

The AIinASIA View: Google's TPU commercialisation represents a watershed moment in AI infrastructure competition. Whilst Nvidia's software ecosystem remains formidable, the combination of energy efficiency, competitive pricing, and surging Asian demand creates genuine opportunity for disruption. Meta's potential commitment would validate Google's decade-long hardware investment and signal to the market that viable alternatives exist. We expect other hyperscalers to follow, accelerating the diversification of AI chip supply chains. The question isn't whether Google will capture market share, but how quickly Nvidia responds to defend its position.

The AI hardware landscape is rapidly evolving, with traditional boundaries between cloud providers, chip makers, and platform companies increasingly blurred. Google's TPU strategy exemplifies this convergence, leveraging vertical integration to challenge established market leaders.

What's your prediction for Google's chances against Nvidia's established position? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


Latest Comments (5)

Ploy Siriwan (@ploytech) · 20 December 2025

OMG this is huge for the whole AI infrastructure game! Google finally pushing their TPUs outside their own cloud: renting capacity next year, then on-prem from 2027 for Meta?! That's a serious power play against Nvidia. It makes me wonder if we'll see smaller players, maybe even some of the growing AI startups here in Southeast Asia, getting access to these advanced TPUs down the line. Imagine what could be built with that kind of tech locally without being locked into just one vendor. This could really accelerate AI development beyond the usual giants. 🤩

Elaine Ng (@elaineng) · 11 December 2025

This move by Google and Meta, sharing or renting TPU capacity, does little to address the broader concentration of AI power in a few corporate hands. It just reshuffles the deckchairs.

Rohan Kumar (@rohank) · 2 December 2025

Wow this is HUGE for us agencies! If Google starts making TPUs available for on-premise, imagine the kind of secure, compliant AI solutions we can build for our financial clients. The security fear is always the biggest hurdle, if Google can crack that with TPUs, we're talking next level automation here. So much potential!

James Clarke (@jamesclarke) · 29 November 2025

This is huge. Funny how everyone focuses on the US giants, but this deal could really open up opportunities for smaller players, even here in Manchester. If Google's serious about making TPUs accessible for on-premise use, it could be a massive win for UK startups needing powerful AI without the usual cloud lock-in or Nvidia price tag.

Oliver Thompson (@olivert) · 29 November 2025

Interesting to see Google potentially opening up their TPU architecture to Meta. For anyone currently wrangling with Nvidia's CUDA stack, the prospect of a viable alternative like TPUs for large-scale AI infra is genuinely quite appealing. It's not just about the hardware specs, but the entire ecosystem and tooling around it.
