AI in Asia
India's AI Future: New Ethics Boards
Updated Apr 21, 2026


India launches a comprehensive AI governance framework with two new regulatory bodies, offering a model for balancing innovation with citizen protection.

AI Snapshot

The TL;DR: what matters, fast.

India establishes two AI regulatory bodies, the AIGG and TPEC, by December 2025

Framework emphasises coordinated governance across sectors rather than fragmented rules

India processes 82.3 billion AI transactions, ranking second globally in adoption

India's Ministry of Electronics and Information Technology (MeitY) has unveiled its AI Governance Guidelines, establishing a coordinated national approach to artificial intelligence oversight. The guidelines introduce two new regulatory bodies and represent a shift from fragmented sector-specific rules to unified governance principles. For a country with the world's largest digital population and one of the fastest-growing AI industries, the framework has implications that extend far beyond India's borders. The approach positions India as a leader in balanced AI regulation, prioritising innovation while maintaining citizen protection.

The timing of the framework matters. India's AI industry added more than 200,000 jobs during 2025, and enterprise AI adoption has accelerated across financial services, healthcare, and manufacturing. Without coordinated governance, the risk of fragmented and inconsistent rules across sectors would have grown rapidly. The new framework aims to prevent this fragmentation while preserving the flexibility that has supported India's AI growth.

Dual governance bodies launch by December 2025

The guidelines establish two critical institutions to oversee India's AI ecosystem. The Artificial Intelligence Governance Group (AIGG) is the primary national coordinating body responsible for policy alignment across ministries and regulatory frameworks. The Technology and Policy Expert Committee (TPEC) is a technical advisory body providing expertise on standards, implementation guidelines, and emerging AI technologies.

Both bodies are committed to being operational by December 2025, signalling government urgency in establishing formal oversight mechanisms. The coordination mandate explicitly covers a unified approach across sectors, including healthcare, finance, and media, rather than allowing isolated regulatory silos. This structured approach reflects lessons learned from other jurisdictions, including the EU, where multiple overlapping regulatory bodies have created complexity for businesses operating across sectors.

The AIGG is chaired by a senior MeitY official and includes representation from other ministries with AI-relevant portfolios including Health, Finance, Home Affairs, and Labour. TPEC draws on academic and industry expertise, with members appointed for technical expertise rather than political affiliation. The combination provides political authority at the AIGG level and technical depth at the TPEC level.

The principles-based foundation

India's framework rests on seven core principles: transparency, accountability, privacy, safety, non-discrimination, sustainability, and digital sovereignty. Each principle includes implementation guidance specific to AI contexts. The principles are aligned with international norms, including the OECD AI Principles and the UNESCO Recommendation on the Ethics of Artificial Intelligence, which helps Indian firms operate internationally while maintaining domestic compliance.

The digital sovereignty principle deserves specific attention. India's framework explicitly prioritises domestic data control, Indian-language AI capabilities, and support for Indian AI infrastructure. This aligns with the broader India AI Mission strategy and the IndiaAI Kosh compute initiative. The sovereignty emphasis is stronger than in Singapore's framework and reflects India's position as a large domestic market that can credibly pursue AI independence rather than being forced into pure alignment with international platforms.

The framework explicitly addresses Indic language AI. Models deployed for Indian consumers should handle relevant Indian languages with meaningful quality. This requirement favours Indian firms including Sarvam AI and Krutrim that have built language capability from the ground up, while raising the bar for international firms targeting Indian consumers. MeitY's official portal hosts the framework documents and implementation guidance.
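The framework does not specify how language coverage would be verified. As a purely illustrative sketch, a deployer auditing model outputs might start with crude script detection via Unicode block ranges; the block ranges below are standard Unicode facts, but using script presence as a proxy for language coverage is an assumption of this sketch, not anything MeitY mandates.

```python
# Crude Indic-script detection via Unicode block ranges (stdlib only).
# Treating script presence as a proxy for language coverage is an
# illustrative assumption, not a MeitY-mandated compliance test.
INDIC_BLOCKS = {
    "Devanagari": (0x0900, 0x097F),  # Hindi, Marathi, among others
    "Bengali":    (0x0980, 0x09FF),
    "Tamil":      (0x0B80, 0x0BFF),
}

def detect_indic_scripts(text: str) -> set[str]:
    """Return the names of Indic script blocks present in `text`."""
    found = set()
    for ch in text:
        cp = ord(ch)
        for name, (lo, hi) in INDIC_BLOCKS.items():
            if lo <= cp <= hi:
                found.add(name)
    return found
```

A real audit would go much further (spelling, fluency, task accuracy per language), but a check like this can at least flag models that never emit Indic scripts at all.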

Sector-specific implementation under the unified framework

The unified framework is being translated into sector-specific guidance through partnerships between the AIGG and sector regulators. The Reserve Bank of India has begun updating its banking AI guidance to align with the new framework. The Insurance Regulatory and Development Authority of India (IRDAI) is doing the same for insurance. The National Medical Commission, which replaced the Medical Council of India in 2020, and relevant health authorities are addressing healthcare AI governance.

Each sector adapts the general principles to sector-specific context. Banking focuses heavily on fraud prevention, credit risk, and anti-money laundering use cases. Healthcare emphasises clinical safety, patient consent, and medical device regulation. Media and content sectors focus on content authenticity, deepfake detection, and intellectual property protection.

The sector-specific implementation has been generally welcomed by industry. Unlike a pure horizontal regulation that tries to cover all use cases with identical rules, the vertical implementation allows detailed guidance tailored to each sector's actual risks. This approach mirrors the Singapore model but with more explicit regulatory enforcement at the sector level.

How the framework compares internationally

India's framework sits between the EU's prescriptive AI Act and Singapore's principles-based approach. It has more formal enforcement mechanisms than Singapore's voluntary framework, but its requirements are less prescriptive than the EU's. The middle path may be attractive to other emerging economies looking for a workable governance model that does not require EU-level compliance infrastructure.

Key differences from the EU approach include the absence of risk-tier categorisation with automatic compliance requirements, less emphasis on conformity assessment requirements, and more reliance on sector regulators for specific implementation. These differences reduce compliance costs but create more uncertainty about specific requirements. For businesses, the trade-off depends on the specific use case and the predictability required.

Key differences from the Singapore approach include more explicit enforcement authority, more formal regulatory bodies, and clearer sector-specific requirements. The differences reflect India's larger market size, greater regulatory capacity, and more ambitious industrial policy objectives. Brookings analysis of emerging economy AI governance has noted that India's approach provides a template adaptable to other large emerging economies.

Enforcement and compliance mechanisms

Enforcement under the new framework combines multiple mechanisms. The AIGG has authority to issue guidance and coordinate cross-sector action but relies on sector regulators for direct enforcement. The Digital Personal Data Protection Act of 2023 provides privacy-related enforcement authority that applies to AI systems processing personal data. Consumer protection law provides redress for AI system harms affecting individuals.

The framework includes mandatory incident reporting for AI systems affecting critical infrastructure or producing significant harm. Reporting requirements apply to both domestic and international AI providers operating in India. The requirement is similar to the EU AI Act's reporting provisions but with different thresholds and reporting formats.
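The article notes that the framework's reporting thresholds and formats differ from the EU AI Act's, but it does not publish them. As a purely illustrative sketch of how an operator might triage incidents internally, the record fields, sensitive-data categories, and harm threshold below are all hypothetical assumptions, not details from MeitY's guidance.

```python
from dataclasses import dataclass

# Hypothetical incident record; every field name and threshold here is
# an illustrative assumption, not taken from MeitY's actual guidance.
@dataclass
class AIIncident:
    affects_critical_infrastructure: bool  # e.g. payments rails, power grid
    individuals_harmed: int                # people materially affected
    data_categories: list[str]             # e.g. ["health", "financial"]

SENSITIVE = {"health", "financial", "biometric"}  # assumed category list
HARM_THRESHOLD = 100  # assumed; the real threshold is not published here

def must_report(incident: AIIncident) -> bool:
    """Return True if the incident plausibly crosses a mandatory
    reporting threshold under rules like those the article describes."""
    if incident.affects_critical_infrastructure:
        return True
    if incident.individuals_harmed >= HARM_THRESHOLD:
        return True
    return any(c in SENSITIVE for c in incident.data_categories)
```

The point of a triage function like this is that the critical-infrastructure trigger is absolute while other criteria are thresholded, which matches the article's description of reporting applying to "critical infrastructure or significant harm".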

For cross-border AI services, the framework's digital sovereignty emphasis affects where AI processing can occur. Certain categories of sensitive data must remain in Indian data centres or be processed by providers with Indian operational presence. This affects how international AI providers structure their Indian operations and may require additional infrastructure investment.

What the framework means for AI companies operating in India

For Indian AI companies, the framework provides regulatory clarity that supports long-term investment planning. Sarvam AI, Krutrim, Ola's AI division, Reliance Jio's AI group, and other Indian firms now operate under a clearer regulatory environment. The sovereignty emphasis specifically benefits Indian firms over international competitors in certain use cases.

For international AI providers, the framework requires attention to Indian-specific requirements. OpenAI, Anthropic, Google, Microsoft, and other global providers need to demonstrate compliance with the framework's principles and support for Indic languages. The compliance work is substantial but manageable for firms with existing global compliance capability.

Chinese AI providers face more complex implications. Security concerns about Chinese AI in India are real and ongoing. The framework does not explicitly discriminate against Chinese providers, but implementation decisions could in practice make Chinese AI deployment in India more difficult. ByteDance, Alibaba, and Tencent have all scaled back India-facing AI offerings during 2025 partly in response to regulatory uncertainty.

Open questions and next steps

Several questions about the framework remain open. Enforcement capacity at the AIGG is untested: the body has been established, but its ability to coordinate across powerful sector regulators will only be proven in practice. Political support for the framework may vary across government changes, which creates uncertainty about long-term stability.

The intersection with the Digital Personal Data Protection Act requires clarification. The two frameworks have overlapping jurisdiction in areas involving AI processing of personal data, and resolution of this overlap will affect compliance complexity. Industry associations including NASSCOM and the Data Security Council of India are working with MeitY to clarify the intersection.

The NASSCOM policy engagement with the framework has been constructive but has highlighted specific implementation challenges that need resolution. Standard-setting processes, compliance certification, and cross-border data transfer rules all require additional specificity beyond the principles-level framework. For now, India's framework represents one of the most serious large-country approaches to AI governance outside the EU. Whether it achieves the balance of innovation enablement and citizen protection that it aspires to remains to be seen in practice, but the framework is credible enough to have already influenced thinking in multiple other jurisdictions.

Updates

  • Byline migrated from "Asia Desk - Mumbai" (raj-patel) to Intelligence Desk per editorial integrity policy.
