
AI in ASIA
South Asia

Sarvam-105B Is India's First Competitive Open-Source LLM, and the Enterprise Roadmap Just Became Real

Sarvam-105B matches frontier reasoning benchmarks while Sarvam-30B delivers 3x throughput on L40S. India's open-source AI moment has arrived.

Intelligence Desk • 5 min read


India now has a domestic open-source large language model that matches frontier reasoning benchmarks. Sarvam AI released Sarvam-30B and Sarvam-105B on 6 March 2026, trained entirely in India on compute provided under the IndiaAI Mission. Weights are downloadable from AI Kosh and Hugging Face, API access is live, and both models are already powering Sarvam's own production products. For Indian banks, telcos, and public-sector buyers that have been waiting for a credible local alternative to Western frontier models, the wait is over.
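Since the weights are on Hugging Face, loading them should follow the standard transformers path. A minimal sketch, with one caveat: the repo ids below are placeholders I am assuming, not names confirmed by the release; check Sarvam's Hugging Face organisation page for the real ones.

```python
# Sketch: pulling Sarvam weights from Hugging Face via the standard
# transformers loading path. The repo ids are assumptions, not confirmed names.
SARVAM_REPOS = {
    "sarvam-30b": "sarvamai/sarvam-30b",    # hypothetical repo id
    "sarvam-105b": "sarvamai/sarvam-105b",  # hypothetical repo id
}

def resolve_repo(name: str) -> str:
    """Map a short model name to its (assumed) Hugging Face repo id."""
    try:
        return SARVAM_REPOS[name.lower()]
    except KeyError:
        raise ValueError(f"unknown Sarvam model: {name!r}") from None

if __name__ == "__main__":
    # Heavy imports and the actual multi-gigabyte download stay behind the guard.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = resolve_repo("sarvam-30b")
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        torch_dtype=torch.bfloat16,  # L40S-class GPUs support bf16
        device_map="auto",           # shard across available GPUs
    )
    inputs = tokenizer("Explain UPI in one sentence.", return_tensors="pt")
    out = model.generate(**inputs.to(model.device), max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The same loading path works for either model; only the repo id and GPU budget change.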

The quantitative case for Sarvam-105B is stronger than many international observers expected. It is not just "good for an Indian model". It is competitive with the top tier on reasoning and maths, and sets state-of-the-art on Indian language benchmarks. For Asia more broadly, it demonstrates that sovereign AI models are reaching commercial viability rather than remaining policy signalling.

The Benchmark Numbers Are What Matter

Sarvam-105B achieves 98.6 on Math500, matching top frontier models. It scores 71.7 on LiveCodeBench v6, outperforming most competitors on coding tasks. On knowledge, Sarvam-105B hits 90.6 on MMLU and 81.7 on MMLU Pro, putting it in the frontier-class band. Instruction-following sits at 84.8 on IF Eval.


Sarvam-30B is the more commercially interesting model. It achieves 88.3 Pass@1 on AIME 25 (rising to 96.7 with tool use) and 66.5 on GPQA Diamond, while using only 2.4 billion active parameters and delivering 1.5x to 3x throughput improvements on mid-tier accelerators like Nvidia L40S compared with similar models at 28K input / 4K output sequence lengths.

Both Sarvam-30B and 105B achieve state-of-the-art results on Indian language benchmarks, outperforming significantly larger models.

Sarvam AI model release notes, 6 March 2026

The implication for Indian enterprises is concrete. A bank, hospital, or ride-hailing platform running Sarvam-30B on an L40S cluster can match or beat the quality of frontier-model API calls on the workloads that drive most of their inference volume, while keeping data inside India's sovereign compute footprint.


The IndiaAI Mission Just Validated Its Spending Thesis

The IndiaAI Mission launched in March 2024 with a budget of ₹10,370 crore (roughly $1.25 billion) earmarked for compute, model development, skilling, and data platforms. Sarvam-30B and 105B are the first major open-source deliverables trained on mission-funded compute, which changes the political economy of Indian AI.

Before 6 March, the Mission's supporters had to argue that sovereign compute would one day produce competitive models. After 6 March, they can point to benchmark numbers. That matters for the next phase of Mission funding, which is expected to expand through 2026 and 2027.

By The Numbers

  • 98.6: Sarvam-105B score on Math500, matching top frontier models.
  • 90.6: Sarvam-105B score on MMLU, and 81.7 on MMLU Pro.
  • 2.4B: Active parameters in Sarvam-30B, delivering 1.5x-3x throughput on L40S accelerators.
  • 88.3 (Pass@1): Sarvam-30B on AIME 25 benchmark, rising to 96.7 with tool use.
  • ₹10,370 crore: IndiaAI Mission budget backing domestic compute, roughly $1.25 billion.

Samvaad and Indus: Production Products, Not Demos

Sarvam-30B powers Samvaad, Sarvam's conversational AI platform, while Sarvam-105B powers Indus, an AI assistant targeting complex reasoning and agentic workflows. Both products are live and shipping to enterprise customers.

That detail matters more than the benchmarks. Indian AI has had credible research labs for a decade, but the gap from research paper to production product has consistently been where Indian AI loses to Chinese and US competitors. Sarvam shipping two production products on release day signals a different operating model: vertically integrated from model training to application layer, optimised for enterprise deployment rather than consumer benchmarks.

| Model | Parameters (Active) | Best-In-Class Benchmarks | Production Use Case | Availability |
| --- | --- | --- | --- | --- |
| Sarvam-30B | 2.4B active | AIME 25: 88.3 (96.7 with tools) | Samvaad conversational AI | AI Kosh, Hugging Face, API |
| Sarvam-105B | Dense | Math500: 98.6, MMLU: 90.6 | Indus reasoning and agentic workflows | AI Kosh, Hugging Face, API |
| IndiaAI Mission compute | - | GPU capacity | Training infrastructure | Government-provided |

What This Means for India's Regional AI Position

Indian enterprises, government agencies, and education institutions can now specify a credible domestic LLM in procurement. That was not true even three months ago.

The implications for ASEAN, the Gulf, and African markets are worth tracking. Sarvam's open weights, combined with strong Indian-language performance, make the models attractive for any market with linguistic overlap with South Asian languages. Tamil, Bengali, and Hindi-speaking diaspora markets across Singapore, Malaysia, and the UAE are obvious initial deployment contexts.

The Gulf Cooperation Council's sovereign AI agenda, which we have covered in earlier reporting, may be most strongly influenced by Sarvam's template: Indian training infrastructure, Indian-language specialisation, and an open-source release strategy. That combination gives customer markets that cannot host the training a clear, governable path to deployment.

Sarvam-105B is described as the first competitive Indian open-source LLM.

Hacker News discussion, 6 March 2026

For context on the broader India AI economy, see our coverage of India's AI talent export flywheel and cross-border AI talent flow across Asia. For the model landscape, our Baidu ERNIE 5 coverage and Asian universities dominating AI research provide the peer comparison. On infrastructure, see Indonesia's sovereign AI stack.

Enterprise Adoption Playbook

  • Start with Sarvam-30B for high-volume, latency-sensitive workloads like customer service, document triage, and Indian-language translation.
  • Use Sarvam-105B for reasoning-heavy workloads where the price-performance arithmetic favours domestic deployment over frontier API calls.
  • Combine with a frontier US or European model kept on standby for edge cases, creating a two-model stack with Sarvam as default.
  • Validate compliance with India's Digital Personal Data Protection Act requirements, noting that domestic-hosted models simplify data residency obligations.
  • Track IndiaAI Mission's second tranche of funding, which may include support for fine-tuning Sarvam for specific sectors.
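The two-model stack in the playbook can be sketched as a simple router. Everything here is illustrative: the workload taxonomy, model names, and routing rules are assumptions for the sketch, not Sarvam guidance.

```python
# Minimal sketch of the playbook's two-model stack: Sarvam as the default,
# a frontier API kept on standby for edge cases. Workload tags, model names,
# and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Route:
    model: str
    reason: str

# Hypothetical workload tags an enterprise might attach to each request.
HIGH_VOLUME = {"customer_service", "document_triage", "indic_translation"}
REASONING_HEAVY = {"agentic_workflow", "financial_analysis", "code_generation"}

def route(workload: str, needs_fallback: bool = False) -> Route:
    """Pick a model per the playbook: 30B for high-volume work, 105B for
    reasoning-heavy work, and the frontier API only as an explicit fallback."""
    if needs_fallback:
        return Route("frontier-api", "edge case escalated past the Sarvam default")
    if workload in HIGH_VOLUME:
        return Route("sarvam-30b", "latency-sensitive, high-volume workload")
    if workload in REASONING_HEAVY:
        return Route("sarvam-105b", "reasoning-heavy, kept on domestic deployment")
    return Route("sarvam-30b", "default to the cheaper domestic model")
```

Keeping the fallback behind an explicit flag, rather than automatic, makes it easy to audit exactly which requests left the sovereign compute footprint.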

Three Things To Watch Next

  • The pace at which Sarvam 3.0 or a successor emerges. India's first-mover advantage only holds if model velocity stays competitive.
  • Enterprise pricing disclosure. Sarvam's API pricing has not been publicly benchmarked against OpenAI, Anthropic, and Google at comparable throughput levels.
  • Regional deployment partnerships. A Singapore or UAE deployment would signal Sarvam's ambition beyond India's domestic market.
The AI in Asia View

Sarvam-30B and 105B change the Indian enterprise AI conversation from "we hope for a local model" to "which local model should we deploy and how". That is a material shift, and it lands at the same time as India's sovereign compute capacity is coming online. We expect Indian public-sector procurement to quietly mandate Sarvam compatibility for certain high-sensitivity workloads within twelve months, which would be the strongest possible commercial push. The question the market has not yet answered is how quickly Sarvam can move from Indian-optimised to multi-language South Asian and Gulf deployment, because that is where the addressable market gets interesting for investors.

Frequently Asked Questions

What are Sarvam-30B and Sarvam-105B?

They are open-source large language models released by Sarvam AI on 6 March 2026, trained in India on compute provided by the IndiaAI Mission. Sarvam-30B is optimised for throughput on mid-tier accelerators, while Sarvam-105B targets frontier-class reasoning benchmarks.

Where can enterprises access Sarvam models?

Model weights are downloadable from AI Kosh and Hugging Face. API access is available via Sarvam's developer dashboard.
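For the hosted route, a hedged sketch of what a call might look like. The endpoint URL, header, and payload schema below are assumptions styled on the widely used OpenAI-compatible chat format; confirm the real shapes against Sarvam's developer dashboard before using them.

```python
# Sketch of a hosted API call, assuming an OpenAI-style chat-completions
# endpoint. The URL, auth header, and payload schema are assumptions, not
# documented Sarvam interfaces.
import json
import os
import urllib.request

API_URL = "https://api.sarvam.ai/v1/chat/completions"  # assumed endpoint

def build_payload(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-style request body (assumed schema)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    payload = build_payload("sarvam-105b", "Summarise the DPDP Act in two lines.")
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['SARVAM_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```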

How do Sarvam models compare with GPT and Claude on benchmarks?

Sarvam-105B matches frontier-class models on Math500 (98.6) and scores competitively on MMLU Pro (81.7) and LiveCodeBench v6 (71.7). On Indian language benchmarks, Sarvam-105B achieves state-of-the-art, outperforming significantly larger models.

Can Sarvam models run on-premise?

Yes. Sarvam-30B is specifically optimised for mid-tier accelerators like Nvidia L40S and delivers 1.5x-3x throughput improvements over comparable models at 28K input / 4K output sequence lengths, making on-premise deployment commercially viable.

Does the IndiaAI Mission limit who can use Sarvam?

No. The models are open-source under standard licensing. The IndiaAI Mission provided the compute for training, but use of the models is not restricted to Indian entities.

Which Indian-language or reasoning workload are you most likely to shift from a frontier API to Sarvam? Drop your take in the comments below.
