Sarvam-105B Is India's First Competitive Open-Source LLM, and the Enterprise Roadmap Just Became Real
India now has a domestic open-source large language model that matches frontier reasoning benchmarks. Sarvam AI released Sarvam-30B and Sarvam-105B on 6 March 2026, trained entirely in India on compute provided under the IndiaAI Mission. Weights are downloadable from AI Kosh and Hugging Face, API access is live, and both models are already powering Sarvam's own production products. For Indian banks, telcos, and public-sector buyers that have been waiting for a credible local alternative to Western frontier models, the wait is over.
The quantitative case for Sarvam-105B is stronger than many international observers expected. It is not just "good for an Indian model". It is competitive with the top tier on reasoning and maths, and sets the state of the art on Indian language benchmarks. For Asia more broadly, it is a demonstration that sovereign AI models are reaching commercial viability rather than staying as policy signalling.
The Benchmark Numbers Are What Matter
Sarvam-105B achieves 98.6 on Math500, matching top frontier models. It scores 71.7 on LiveCodeBench v6, outperforming most competitors on coding tasks. On knowledge, Sarvam-105B hits 90.6 on MMLU and 81.7 on MMLU Pro, putting it in the frontier-class band. Instruction-following sits at 84.8 on IF Eval.
Sarvam-30B is the more commercially interesting model. It achieves 88.3 Pass@1 on AIME 25 (rising to 96.7 with tool use) and 66.5 on GPQA Diamond, while using only 2.4 billion active parameters and delivering 1.5x to 3x throughput improvements on mid-tier accelerators like Nvidia L40S compared with similar models at 28K input / 4K output sequence lengths.
Both Sarvam-30B and 105B achieve state-of-the-art results on Indian language benchmarks, outperforming significantly larger models.
The implication for Indian enterprises is concrete. A bank, hospital, or ride-hailing platform running Sarvam-30B on an L40S cluster can match or beat the quality of frontier-model API calls on the workloads that drive most of their inference volume, while keeping data inside India's sovereign compute footprint.
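As a rough illustration of what that deployment looks like, here is a minimal sketch of loading the openly released weights with Hugging Face Transformers. The repository id, prompt, and precision settings are assumptions for illustration; check the actual model card on AI Kosh or Hugging Face before deploying.

```python
# Minimal sketch: running Sarvam-30B on-premise from the open weights.
# The repo id "sarvamai/sarvam-30b" is an assumption, not confirmed by this article.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "sarvamai/sarvam-30b"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # halves memory vs fp32; shard across the L40S cluster
    device_map="auto",            # spreads layers over all visible GPUs
)

prompt = "Summarise this customer complaint in Hindi: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```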
The IndiaAI Mission Just Validated Its Spending Thesis
The IndiaAI Mission launched in March 2024 with a budget of ₹10,370 crore (roughly $1.25 billion) earmarked for compute, model development, skilling, and data platforms. Sarvam-30B and 105B are the first major open-source deliverables trained on mission-funded compute, which changes the political economy of Indian AI.
Before 6 March, the Mission's supporters had to argue that sovereign compute would one day produce competitive models. After 6 March, they can point to benchmark numbers. That matters for the next phase of Mission funding, which is expected to expand through 2026 and 2027.
By The Numbers
- 98.6: Sarvam-105B score on Math500, matching frontier-class models.
- 90.6: Sarvam-105B score on MMLU, and 81.7 on MMLU Pro.
- 2.4B: Active parameters in Sarvam-30B, delivering 1.5x-3x throughput on L40S accelerators.
- 88.3 (Pass@1): Sarvam-30B on AIME 25 benchmark, rising to 96.7 with tool use.
- ₹10,370 crore: IndiaAI Mission budget backing domestic compute, roughly $1.25 billion.
Samvaad and Indus: Production Products, Not Demos
Sarvam-30B powers Samvaad, Sarvam's conversational AI platform, while Sarvam-105B powers Indus, an AI assistant targeting complex reasoning and agentic workflows. Both products are live and shipping to enterprise customers.
That detail matters more than the benchmarks. Indian AI has had credible research labs for a decade, but the gap from research paper to production product has consistently been where Indian AI loses to Chinese and US competitors. Sarvam shipping two production products on release day signals a different operating model: vertically integrated from model training to application layer, optimised for enterprise deployment rather than consumer benchmarks.
| Model | Parameters (Active) | Best-in-Class Benchmarks | Production Use Case | Availability |
|---|---|---|---|---|
| Sarvam-30B | 2.4B active | AIME 25: 88.3 (96.7 with tools) | Samvaad conversational AI | AI Kosh, Hugging Face, API |
| Sarvam-105B | 105B (dense) | Math500: 98.6, MMLU: 90.6 | Indus reasoning and agentic workflows | AI Kosh, Hugging Face, API |
| IndiaAI Mission compute | - | GPU capacity | Training infrastructure | Government-provided |
What This Means for India's Regional AI Position
Indian enterprises, government agencies, and education institutions can now specify a credible domestic LLM in procurement. That was not true even three months ago.
The implications for ASEAN, the Gulf, and African markets are worth tracking. Sarvam's open weights, combined with strong Indian-language performance, make the models attractive for any market with linguistic overlap with South Asian languages. Tamil, Bengali, and Hindi-speaking diaspora markets across Singapore, Malaysia, and the UAE are obvious initial deployment contexts.
The Gulf Cooperation Council's sovereign AI agenda, which we have covered in earlier reporting, may be most strongly influenced by Sarvam's template: Indian training infrastructure, Indian-language specialisation, and an open-source release strategy. That combination gives customer markets that cannot host the training a clear, governable path to deployment.
Sarvam-105B is described as the first competitive Indian open-source LLM.
For context on the broader India AI economy, see our coverage of India's AI talent export flywheel and cross-border AI talent flow across Asia. For the model landscape, our Baidu ERNIE 5 coverage and Asian universities dominating AI research provide the peer comparison. On infrastructure, see Indonesia's sovereign AI stack.
Enterprise Adoption Playbook
- Start with Sarvam-30B for high-volume, latency-sensitive workloads like customer service, document triage, and Indian-language translation.
- Use Sarvam-105B for reasoning-heavy workloads where the price-performance arithmetic favours domestic deployment over frontier API calls.
- Combine with a frontier US or European model kept on standby for edge cases, creating a two-model stack with Sarvam as the default (a routing sketch follows this list).
- Validate compliance with India's Digital Personal Data Protection Act requirements, noting that domestic-hosted models simplify data residency obligations.
- Track IndiaAI Mission's second tranche of funding, which may include support for fine-tuning Sarvam for specific sectors.
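Below is a minimal sketch of that two-model stack, assuming a simple rule-based router. The model identifiers, the escalation rule, and the length cutoff are illustrative assumptions, not a reference architecture.

```python
from dataclasses import dataclass

# Hypothetical model identifiers -- substitute your actual deployment endpoints.
DEFAULT_MODEL = "sarvam-30b-onprem"
FALLBACK_MODEL = "frontier-api-standby"

@dataclass
class Route:
    model: str
    reason: str

def route_request(prompt: str, needs_deep_reasoning: bool = False) -> Route:
    """Keep high-volume work on the domestic default; escalate only edge cases."""
    # Edge cases: explicitly flagged multi-step reasoning, or prompts beyond the
    # context budget validated for the on-premise deployment (illustrative cutoff).
    if needs_deep_reasoning or len(prompt) > 100_000:
        return Route(FALLBACK_MODEL, "edge case: escalate to standby frontier model")
    return Route(DEFAULT_MODEL, "default: data stays in the sovereign compute footprint")

if __name__ == "__main__":
    print(route_request("Translate this grievance into Tamil."))
    print(route_request("Draft a multi-step remediation plan ...", needs_deep_reasoning=True))
```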
Three Things To Watch Next
- The pace at which Sarvam 3.0 or a successor emerges. India's first-mover advantage only holds if model velocity stays competitive.
- Enterprise pricing disclosure. Sarvam's API pricing has not been publicly benchmarked against OpenAI, Anthropic, and Google at comparable throughput levels.
- Regional deployment partnerships. A Singapore or UAE deployment would signal Sarvam's ambition beyond India's domestic market.
Frequently Asked Questions
What are Sarvam-30B and Sarvam-105B?
They are open-source large language models released by Sarvam AI on 6 March 2026, trained in India on compute provided by the IndiaAI Mission. Sarvam-30B is optimised for throughput on mid-tier accelerators, while Sarvam-105B targets frontier-class reasoning benchmarks.
Where can enterprises access Sarvam models?
Model weights are downloadable from AI Kosh and Hugging Face. API access is available via Sarvam's developer dashboard.
How do Sarvam models compare with GPT and Claude on benchmarks?
Sarvam-105B matches frontier-class models on Math500 (98.6) and scores competitively on MMLU Pro (81.7) and LiveCodeBench v6 (71.7). On Indian language benchmarks, Sarvam-105B achieves state-of-the-art results, outperforming significantly larger models.
Can Sarvam models run on-premise?
Yes. Sarvam-30B is specifically optimised for mid-tier accelerators like Nvidia L40S and delivers 1.5x-3x throughput improvements over comparable models at 28K input / 4K output sequence lengths, making on-premise deployment commercially viable.
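For teams validating that serving profile, here is a hedged sketch using vLLM. The repository id, tensor-parallel degree, and sampling settings are assumptions, since the article does not specify a serving stack.

```python
# Hypothetical vLLM configuration for the 28K input / 4K output serving profile.
from vllm import LLM, SamplingParams

llm = LLM(
    model="sarvamai/sarvam-30b",   # assumed repo id; confirm on Hugging Face or AI Kosh
    max_model_len=32768,           # 28K prompt plus 4K generation headroom
    tensor_parallel_size=2,        # adjust to the number of L40S cards available
    dtype="bfloat16",
)

params = SamplingParams(max_tokens=4096, temperature=0.2)
outputs = llm.generate(["Summarise the attached loan agreement in Hindi."], params)
print(outputs[0].outputs[0].text)
```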
Does the IndiaAI Mission limit who can use Sarvam?
No. The models are open-source under standard licensing. The IndiaAI Mission provided the compute for training, but use of the models is not restricted to Indian entities.
Which Indian-language or reasoning workload are you most likely to shift from a frontier API to Sarvam? Drop your take in the comments below.