
India: Scale, Rights, and Responsible Digital Infrastructure

India's massive digital infrastructure serves 125 crore users, but governance gaps and rights concerns emerge as 5G reaches 85% population coverage.

Intelligence Desk · 6 min read

AI Snapshot

The TL;DR: what matters, fast.

India has 125 crore wireless subscribers and 85% 5G population coverage

Digital divides persist despite 103 crore internet users and affordable data costs

AI leadership emerges but algorithmic bias concerns threaten equitable access


Policy Status

Policy status: In force
Effective date: February 2026
Applies to: Both
Regulatory impact: High
Tags: South Asia, India, voluntary framework

Quick Overview

India is charting a distinctive path in AI governance — rejecting standalone AI legislation in favour of layering binding rules onto existing legal frameworks while issuing voluntary guidelines for broader industry self-regulation. The IT Rules 2026 Amendment, effective February 20, 2026, introduced India's first binding obligations specifically targeting AI-generated content, including mandatory labelling of synthetic media, rapid takedown timelines for harmful deepfakes, and automated detection requirements for platforms. Meanwhile, the India AI Governance Guidelines released in November 2025 under the IndiaAI Mission establish a voluntary, principles-based framework covering safety, transparency, accountability, and inclusive development. India's approach reflects a deliberate strategy: regulate immediate harms through enforceable rules while preserving flexibility for a fast-moving AI ecosystem worth an estimated $17 billion by 2027.

What's Changing

The most significant regulatory shift came with the IT Rules 2026 Amendment (notified February 10, 2026, effective February 20, 2026), which amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. The amendment specifically targets Synthetically Generated Information (SGI): content created or substantially modified by AI, including deepfakes, synthetic audio, and AI-generated text. Key binding obligations include:

A 2-hour takedown requirement for non-consensual intimate AI-generated imagery reported by affected individuals

A 3-hour takedown window for other categories of unlawful synthetic content flagged through government channels

Mandatory visible labelling and metadata tagging of all AI-generated content distributed on intermediary platforms

A requirement that platforms deploy automated detection systems capable of identifying harmful synthetic content at scale

Separately, the India AI Governance Guidelines (November 5, 2025), issued by MeitY under the IndiaAI Mission, provide a voluntary framework organised around seven principles: safety and reliability, transparency, accountability, positive societal impact, inclusivity, privacy and security, and innovation promotion. These guidelines apply to AI developers, deployers, and users but carry no statutory penalties.
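For platform compliance teams, the takedown windows above translate directly into deadline arithmetic. The sketch below is a minimal illustration of how a grievance-handling system might compute removal deadlines from a report timestamp; the category names and function are hypothetical, not drawn from the rules themselves, and the only grounded inputs are the 2-hour and 3-hour windows stated in the amendment.

```python
from datetime import datetime, timedelta, timezone

# Takedown windows stated in the IT Rules 2026 Amendment:
# 2 hours for non-consensual intimate AI-generated imagery,
# 3 hours for other unlawful synthetic content flagged through
# government channels. Category keys here are illustrative only.
TAKEDOWN_WINDOWS = {
    "non_consensual_intimate": timedelta(hours=2),
    "other_unlawful_sgi": timedelta(hours=3),
}

def takedown_deadline(reported_at: datetime, category: str) -> datetime:
    """Return the latest time by which flagged content must be removed."""
    return reported_at + TAKEDOWN_WINDOWS[category]

# Example: a report received at 09:00 UTC must be actioned by 11:00 UTC
# if it falls in the 2-hour category.
report = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
deadline = takedown_deadline(report, "non_consensual_intimate")
```

In practice a real workflow would also log the report, route it to reviewers, and alert on approaching deadlines; the point here is only that the statutory clock starts at the moment of report, so timestamps should be stored timezone-aware.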

Who's Affected

The IT Rules 2026 Amendment directly impacts social media intermediaries, messaging platforms, content-hosting services, and any digital platform operating in India that enables the distribution of AI-generated content. Major platforms including Meta, Google, X, and domestic players like ShareChat and Koo must implement synthetic content detection infrastructure and comply with accelerated takedown timelines. AI model developers and deployers creating tools capable of generating synthetic media face obligations around content labelling and watermarking. The voluntary India AI Governance Guidelines cast a wider net, addressing AI startups, enterprise adopters, cloud service providers, government agencies deploying AI in public services, and research institutions. Sector-specific regulators — including the RBI for financial services, SEBI for capital markets, IRDAI for insurance, and the National Medical Commission for healthcare AI — are expected to layer additional domain-specific requirements. India's 800+ million internet users and the country's position as a global AI talent hub and outsourcing destination mean these rules have significant cross-border implications for multinational technology companies.

Core Principles

India's AI governance architecture rests on several interconnected pillars. The Digital Personal Data Protection (DPDP) Act 2023, India's first comprehensive data protection law, provides the foundational legal backstop for AI systems processing personal data, establishing consent frameworks, data fiduciary obligations, and cross-border transfer rules. The IT Rules 2026 Amendment layers AI-specific content obligations onto the existing intermediary liability framework. The India AI Governance Guidelines articulate seven voluntary principles:

Safety and reliability: testing, risk assessment, and monitoring throughout the AI lifecycle

Transparency and explainability: clear disclosure of AI system capabilities and limitations

Accountability: defined roles and governance structures for AI outcomes

Positive societal impact: alignment with national development goals

Inclusivity and non-discrimination: bias mitigation and equitable access

Privacy and security: data minimisation and cybersecurity measures

Innovation and open collaboration: support for research and responsible experimentation

Institutional governance is coordinated through the IndiaAI Mission's three-tier structure: an AI Governance Group providing inter-ministerial policy coordination, a Technology and Policy Expert Committee offering technical guidance, and the IndiaAI Safety Institute conducting safety research and evaluation.

What It Means for Business

For businesses operating in India, the dual-track regulatory approach creates both clear compliance requirements and broader guidance expectations. The IT Rules 2026 Amendment imposes immediate, enforceable obligations: platforms must build or integrate synthetic content detection capabilities, implement content labelling systems, establish rapid-response takedown workflows meeting the 2-3 hour timelines, and maintain compliance documentation. Non-compliance can trigger revocation of intermediary safe harbour protections, exposing platforms to direct liability. The voluntary guidelines, while not legally binding, signal government expectations and are likely to influence procurement criteria for government AI contracts, investor due diligence standards, and sector-specific regulatory guidance from bodies like the RBI and SEBI. Companies should treat the voluntary principles as a compliance roadmap, particularly around AI impact assessments, bias audits, transparency disclosures, and grievance mechanisms. India's massive domestic market, its role as a global technology services hub, and the government's stated intention to regulate AI through existing legal frameworks rather than standalone legislation mean that businesses need sector-aware compliance strategies rather than a single AI compliance programme.

What to Watch Next

Several developments will shape India's AI governance trajectory in 2026 and beyond. The operationalisation of the DPDP Act — particularly the establishment of the Data Protection Board of India and the finalisation of subordinate rules — will clarify how data protection intersects with AI training and deployment. The Digital India Act, originally proposed to replace the 25-year-old IT Act 2000, has been effectively sidelined; the government has instead chosen to regulate emerging technologies through targeted amendments to existing rules. Watch for sector-specific AI guidance from the RBI (which has already signalled interest in responsible AI in lending and credit scoring), SEBI, and healthcare regulators. The IndiaAI Safety Institute, modelled on the UK and US AI safety institutes, is expected to begin publishing safety evaluation frameworks and incident reporting protocols. India's engagement with the Global Partnership on AI (GPAI) and the Hiroshima AI Process will influence how its domestic framework aligns with international interoperability standards. The government has signalled that additional binding rules targeting AI in hiring, education, and criminal justice may follow if voluntary compliance proves insufficient.


| Aspect | India | Japan | South Korea |
| --- | --- | --- | --- |
| Approach type | Rights-based with sector rules | Principles-based | Rights-based |
| Legal strength | Strong privacy law | Voluntary | Moderate |
| Focus areas | Privacy, inclusion, fairness | Safety, transparency | Privacy, accountability |
| Lead bodies | MeitY, RBI, IRDAI, NDHM | METI, Cabinet Office | MSIT, PIPC |

Last editorial review: March 2026


This overview is provided for general informational purposes only and does not constitute legal advice. Regulatory frameworks may evolve, and readers should consult official government sources or legal counsel where appropriate.

