
AI in ASIA
News

India's IndiaAI Kosh: 38,000 GPUs from ₹67/hour — The DPI Approach to AI Compute

India deploys 38,000 GPUs at ₹67/hour through IndiaAI Kosh. Phase 2 adds 20,000 more as AI compute becomes public infrastructure.

Intelligence Desk · 5 min read


India is applying its digital public infrastructure playbook to artificial intelligence. The IndiaAI Mission, backed by ₹10,300 crore in government funding, has deployed 38,000 GPUs nationwide through its compute platform, with pricing starting at ₹67 per hour for NVIDIA H100 access. Phase 2, announced at the India AI Impact Summit in March 2026, will add another 20,000 GPUs within six months.

The strategy mirrors the approach that made Unified Payments Interface (UPI) and Aadhaar global benchmarks: government sets the rails, private enterprises compete on top. IndiaAI Kosh, the national AI compute and data repository, is designed to ensure startups and researchers can train models without burning through venture capital on cloud bills.

Who's Building the Infrastructure?

The compute backbone is a public-private partnership spanning multiple providers:


  • Yotta Data Services leads with 9,216 GPUs, including 8,192 H100s, with plans for 20,736 NVIDIA Blackwell Ultra GPUs by August 2026
  • E2E Networks deploys Blackwell clusters at L&T Vyoma Data Centre in Chennai
  • Jio Platforms contributes 208 H200 and 104 AMD MI300X GPUs
  • AWS-managed partners provide 1,200 lower-tier GPUs for lighter workloads

The IndiaAI Compute Portal, launching imminently, will let startups, ministries, state governments, and researchers request capacity through a unified platform. Eligible users receive up to 40% subsidies on compute costs.

India is no longer waiting for permission to build AI. We are building the infrastructure so the next generation of Indian AI companies can scale without dependence on foreign cloud providers.

Abhishek Singh, CEO, Digital India Corporation

Early adopters demonstrate the model's potential. Sarvam.ai trained its Sarvam-3 multilingual LLM on Yotta's H100 cluster, then open-sourced models covering 22 Indic languages. The company's success shows that sovereign AI capacity does not mean isolation from the global ecosystem.

By The Numbers

  • 38,000 GPUs deployed as of April 2026, with 20,000 more planned within six months
  • ₹67-100 per GPU hour for H100 access, roughly 33% below global cloud rates
  • ₹10,300 crore (~USD 1.2 billion) total IndiaAI Mission investment through 2030
  • 1,000+ machine-readable datasets available through AI Kosh repository
  • 40% subsidy on compute costs for eligible Indian startups and researchers

Provider | GPUs Deployed | Key Hardware
Yotta Data Services | 9,216 | NVIDIA H100; Blackwell Ultra (planned)
E2E Networks | TBC | NVIDIA Blackwell clusters
Jio Platforms | 312 | H200, AMD MI300X
AWS Partners | 1,200 | Lower-tier GPU instances
Others (Phase 2) | 20,000+ | Mix of H100/Blackwell

Why This Matters for Asia

India's DPI approach to AI compute carries implications beyond its borders. If the model succeeds, it offers a template for countries across South and Southeast Asia that lack the capital for independent AI infrastructure. The combination of subsidised compute, open data repositories, and government coordination addresses three barriers simultaneously: cost, data access, and coordination failure.

The risk is that subsidies drive consumption without proportional innovation. Phase 2 will test whether affordable GPUs translate into competitive AI products or simply cheaper experimentation. DeepSeek's open-source models have already shown that raw compute is necessary but insufficient; quality training data and engineering talent matter as much as hardware access.

The DPI model works when you combine infrastructure with ecosystem development. GPUs alone do not produce AI companies.

Dr Karuna Nain, AI Policy Researcher, Observer Research Foundation
The AIinASIA View: India is executing the playbook that made UPI a global model. By treating AI compute as digital public infrastructure (accessible, affordable, and standardised), the country is creating conditions that neither US cloud giants nor Chinese competitors can easily replicate. The real test comes in the next six months: can Phase 2 convert subsidised GPU hours into genuinely competitive AI products, or will cheap compute simply subsidise inefficiency? We are cautiously optimistic, but the answer hinges on data quality and talent retention as much as hardware access.

Frequently Asked Questions

How does IndiaAI Kosh pricing compare to global cloud rates?

AWS charges approximately USD 1.20 per GPU hour for on-demand H100 instances. India's ₹67 rate (under USD 0.80) represents roughly a 33% discount, significant for startups training large language models where compute costs can exceed millions of dollars.
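The arithmetic behind this comparison can be sketched in a few lines. The exchange rate used here (~₹83.75 per USD) is an assumption for illustration; actual conversion rates and cloud list prices vary, and the AWS figure is the one quoted in this article.

```python
# Rough cost comparison using the figures cited in this article.
INR_PER_USD = 83.75   # assumed exchange rate (illustrative only)
KOSH_RATE_INR = 67.0  # IndiaAI Kosh entry price per H100 GPU-hour
AWS_RATE_USD = 1.20   # per-GPU-hour figure quoted in the article
SUBSIDY = 0.40        # maximum subsidy for eligible Indian startups/researchers

# Convert the Kosh rate to USD and compare with the quoted AWS rate.
kosh_usd = KOSH_RATE_INR / INR_PER_USD
discount = 1 - kosh_usd / AWS_RATE_USD

# Effective rate for users who qualify for the full 40% subsidy.
subsidised_inr = KOSH_RATE_INR * (1 - SUBSIDY)

print(f"Kosh rate: ~USD {kosh_usd:.2f}/GPU-hour")           # ~USD 0.80
print(f"Discount vs quoted AWS rate: ~{discount:.0%}")       # ~33%
print(f"With full subsidy: ~₹{subsidised_inr:.0f}/GPU-hour")
```

At these assumed figures, a fully subsidised user pays roughly ₹40 per H100 GPU-hour, about a third of the article's quoted AWS rate.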

Can overseas companies access IndiaAI Kosh?

Policy remains focused on Indian startups and researchers. Multinational companies with R&D centres in India may gain access under bilateral arrangements, but the primary beneficiaries are domestic AI developers.

What datasets are available through AI Kosh?

The repository hosts over 1,000 machine-readable datasets spanning healthcare, agriculture, governance, and financial technology. The education sector is a particularly active contributor, with datasets supporting Indic language model development.

