Microsoft and Nvidia Pour $15 Billion into OpenAI Rival
Microsoft and Nvidia are making their biggest bet yet on Anthropic, the artificial intelligence startup behind the Claude chatbot that directly competes with OpenAI's ChatGPT. The partnership involves $15 billion in fresh investments alongside a $30 billion cloud computing commitment, marking a significant shift in the AI industry's power dynamics.
The deal structure resembles a financial merry-go-round: Anthropic commits $30 billion to use Microsoft's cloud services over multiple years, whilst Nvidia contributes up to $10 billion and Microsoft adds up to $5 billion to Anthropic's latest funding round. This circular arrangement has raised eyebrows about whether money is simply chasing itself around the AI economy.
Breaking OpenAI's Monopoly on Big Tech Backing
Both Microsoft and Nvidia have been OpenAI's primary supporters, making this diversification particularly noteworthy. Microsoft CEO Satya Nadella emphasised that OpenAI "remains a critical partner" even as the company places substantial bets on its competitor.
The timing coincides with OpenAI's own strategic shifts. The ChatGPT maker recently secured a $38 billion cloud deal with Amazon, reducing its dependence on Microsoft's infrastructure. This suggests all major players are hedging their bets as the AI race intensifies.
"This partnership isn't just about making friends; it's about reducing the AI economy's heavy reliance on just one player," said Gil Luria, senior analyst at DA Davidson. "It's smart business for Microsoft not to put all its AI eggs in one basket."
By The Numbers
- Anthropic secured $30 billion in Series G funding at a $380 billion post-money valuation, the largest venture deal of 2026
- Total funding since 2021 inception reaches nearly $64 billion across all rounds
- Run-rate revenue has reached $14 billion, growing more than tenfold over the past three years
- Claude Code revenue exceeds $2.5 billion run-rate, more than doubling since early 2026
- Clients spending over $100,000 annually surged sevenfold in the past year
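As a rough sanity check on those growth figures, a short back-of-envelope calculation (purely illustrative, using only the numbers quoted above) shows what more-than-tenfold growth over three years implies on an annualised basis:

```python
# Illustrative back-of-envelope maths using the figures quoted above.
run_rate_revenue = 14e9   # current run-rate revenue, USD
growth_multiple = 10      # "more than tenfold" over three years
years = 3

# Implied average annual growth multiple: x such that x**years = growth_multiple
annual_multiple = growth_multiple ** (1 / years)

print(f"Implied revenue three years ago: ${run_rate_revenue / growth_multiple / 1e9:.1f}B")
print(f"Implied average annual growth: {annual_multiple:.2f}x per year")
```

In other words, sustaining tenfold growth over three years means roughly doubling revenue every year, which puts the valuation multiple in context.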
The Cloud Wars Reshape AI Infrastructure
Microsoft's Azure AI Foundry customers will gain access to Claude's latest models, meaning Anthropic's technology now runs across all three major cloud providers: Microsoft, Amazon, and Google. However, Amazon remains Anthropic's primary cloud provider and training partner despite the new arrangements.
The partnership extends beyond simple cloud hosting. Anthropic commits to using one gigawatt of compute power with Nvidia's cutting-edge Grace Blackwell and Vera Rubin hardware, representing massive processing capabilities for AI model development.
This mirrors broader trends across Asia's tech landscape, where companies are making similar strategic investments. Singapore's sovereign wealth fund GIC co-led Anthropic's funding round, whilst Hong Kong is backing a new AI research institute with billions in parallel developments.
"Anthropic's thoughtful approach to AI development is changing the way enterprises operate," said Chris Emanuel, head of the technology investment group at GIC. "This represents the clear category leader in enterprise AI, demonstrating breakthrough capabilities and setting a new standard for safety, performance, and scale."
Circular Investments Signal Market Maturation
The deal's structure has prompted criticism about circular financing in the AI sector. As one tech correspondent observed, "Anthropic will pay Microsoft to pay Nvidia so Microsoft and Nvidia can pay Anthropic," creating a closed loop of capital flows.
These arrangements reflect the industry's enormous capital requirements. OpenAI CEO Sam Altman previously mentioned needing $1.4 trillion for 30 gigawatts of computing power, illustrating the scale of ambition driving these investments.
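Taking Altman's figure at face value, a quick sketch (illustrative only; real build-out costs vary enormously with hardware generation and site) puts the implied capital cost per gigawatt alongside Anthropic's one-gigawatt Nvidia commitment:

```python
# Rough cost-per-gigawatt implied by the quoted $1.4 trillion / 30 GW figure.
total_cost = 1.4e12   # USD, Altman's quoted requirement
capacity_gw = 30      # gigawatts of computing power
anthropic_gw = 1      # Anthropic's stated Nvidia compute commitment

cost_per_gw = total_cost / capacity_gw

print(f"Implied cost per gigawatt: ${cost_per_gw / 1e9:.1f}B")
print(f"Implied cost of a 1 GW build-out like Anthropic's: "
      f"${anthropic_gw * cost_per_gw / 1e9:.1f}B")
```

On these numbers, a single gigawatt of AI compute implies tens of billions of dollars of capital, which helps explain why even the largest labs are spreading commitments across multiple cloud and hardware partners.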
The following table shows how major partnerships are reshaping AI infrastructure:
| Company | Primary Cloud Partner | Secondary Partners | Recent Funding |
|---|---|---|---|
| Anthropic | Amazon | Microsoft, Google | $30 billion (2026) |
| OpenAI | Microsoft | Amazon | $40 billion (2025) |
| Cohere | Oracle, AWS | — | $500 million (2024) |
Regional Expansion Accelerates Competition
Anthropic is opening its fourth Asia-Pacific office in Sydney, expanding regional infrastructure alongside existing operations. This follows similar moves by competitors as they recognise Asia's growing importance in AI adoption and development.
The expansion comes as Indian enterprises go all-in on AI investment, whilst Singapore invests more than S$1 billion in AI research over five years. Regional governments and enterprises are driving substantial demand for AI capabilities.
Key developments shaping the regional landscape include:
- Enterprise adoption accelerating with Fortune 500 companies becoming major Claude clients
- Developer tools like Claude Code gaining traction with technical users across Asia
- Safety-focused AI development resonating with regulatory environments in key markets
- Multi-cloud strategies reducing vendor lock-in risks for enterprise customers
- Sovereign wealth funds taking strategic positions in leading AI companies
The partnership also highlights growing concerns about Big Tech backing Anthropic in the wake of Pentagon blacklist developments, as geopolitical considerations increasingly influence AI investment decisions.
Why are Microsoft and Nvidia investing in OpenAI's competitor?
Diversification reduces risk and creates competition in the AI market. Both companies want to avoid over-dependence on a single AI provider whilst fostering innovation through competitive pressure.
How does the $30 billion cloud commitment work?
Anthropic agrees to spend $30 billion on Microsoft's Azure services over several years, providing Microsoft with guaranteed revenue whilst securing Anthropic's computing infrastructure needs.
Will this affect OpenAI's relationship with Microsoft?
Microsoft maintains OpenAI as a "critical partner" whilst diversifying its AI portfolio. The relationships aren't mutually exclusive, allowing Microsoft to benefit from multiple AI providers.
What does this mean for Anthropic's independence?
Despite major investments from tech giants, Anthropic maintains its primary partnership with Amazon and operates across multiple cloud providers, preserving strategic flexibility and avoiding single-vendor dependence.
How significant is GIC's involvement in the funding round?
Singapore's sovereign wealth fund co-leading the round signals strong institutional confidence in Anthropic's enterprise AI strategy and demonstrates growing Asian interest in AI leadership positions.
The implications extend beyond simple financial arrangements. As companies like Anthropic develop simpler AI approaches rather than complex agent systems, these partnerships could reshape how enterprises deploy artificial intelligence across the region.
What do you think about this circular investment pattern in the AI industry? Are we witnessing healthy diversification or just financial engineering? Drop your take in the comments below.
Latest Comments (4)
It does feel a bit like musical chairs with these big tech firms, doesn't it? The sheer scale of that $1.4 trillion compute ambition from Altman really puts into perspective why these alliances are shifting. Diversifying suppliers for that kind of infrastructure spend is just pragmatic. You wouldn't want to be locked into a single vendor for something so critical.
This whole merry-go-round thing with Microsoft and Nvidia splitting their bets between Anthropic and OpenAI makes sense from an infra perspective. Sam Altman talking about needing $1.4 trillion for 30 gigawatts of computing power, that's wild. Even if it's partly PR, it highlights the insane scale we're heading towards. For us, managing cloud spend for even smaller AI models is already a headache. Imagine trying to provision and manage that kind of GPU cluster. The operational overhead alone would be… a lot. These big tech players are basically building out mini-grids just for AI, and spreading risk across providers ensures they always have someone to buy capacity from.
The bit about OpenAI needing $1.4 trillion for 30 gigawatts of computing power really got me thinking. Is that purely for their own internal models or are they estimating for future demand from external users too? Seems like an insane number even with future scaling in mind.
The idea of "spreading risk" through diversification makes sense, but it doesn't inherently address the potential for emergent biases across these new models. More players doesn't automatically mean more ethical outcomes.