India's AI Labelling Rules Are Live, And Every Asian Platform Now Has A Compliance Problem
India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 quietly took effect on 20 February under gazette notification G.S.R. 120(E), and the practical consequences are only landing now. Any platform serving Indian users must visibly label AI-generated content, embed non-removable metadata, and take down flagged deepfakes within three hours. For Asian product teams, the grace period is over.
Why This Is A Pan-Asian Story, Not Just An Indian One
India's digital rulebook has an outsized reach. Meta, Google, ByteDance, Xiaohongshu, and almost every APAC social or creator platform count India among their top three markets by user volume. The new rules demand architectural changes, not PR statements, and they apply to any intermediary reachable by Indian users. The asymmetry matters: a Singapore-based platform that has never shipped an India-specific build is still on the hook.
That reach is why regulators in Jakarta, Kuala Lumpur, and Bangkok are watching this closely. The India framework is the first full statutory deepfake regime in the region that places the burden on platforms rather than originators. If enforcement sticks, expect the 3-hour timeline and non-removable metadata requirement to become de facto ASEAN reference points within 12 months.
By The Numbers
- 3 hours is the new takedown window for flagged unlawful AI content or harmful deepfakes, down from 36 hours under the previous 2021 rules. Source: MeitY Gazette G.S.R. 120(E), 10 February 2026.
- 10% of screen area is the minimum label coverage for AI-generated images, per the enforcement guidance that accompanies the rules.
- 5 million users is the threshold above which platforms must run automated verification of user AI declarations before upload.
- 7 days is the new general grievance response window, down from 15 days. 12 hours is the window for urgent complaints, halved from 24.
- February 20, 2026 was the effective date, just ten days after the gazette notification of 10 February, leaving Asian platforms almost no runway to stand up labelling, metadata, and logging pipelines.

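The tiered obligations above can be encoded as a simple rule check. This is a hedged sketch, not anything prescribed by the rules: the function name and return shape are hypothetical, and the strictly-greater-than reading of "above which" should be confirmed against the gazette text.

```python
AUTOMATED_VERIFICATION_THRESHOLD = 5_000_000  # users, per G.S.R. 120(E)
TAKEDOWN_WINDOW_HOURS = 3    # flagged unlawful AI content / deepfakes
GRIEVANCE_WINDOW_DAYS = 7    # general complaints
URGENT_WINDOW_HOURS = 12     # urgent complaints

def obligations(user_count: int) -> dict:
    """Which 2026-rule obligations apply at a given scale.

    Labelling, takedown, and logging bind every intermediary with
    Indian users; only automated verification of user AI declarations
    is gated on the 5-million-user threshold ("above which" is read
    here as strictly greater than -- an assumption, not settled text).
    """
    return {
        "labelling": True,
        "takedown_hours": TAKEDOWN_WINDOW_HOURS,
        "grievance_days": GRIEVANCE_WINDOW_DAYS,
        "urgent_hours": URGENT_WINDOW_HOURS,
        "automated_verification": user_count > AUTOMATED_VERIFICATION_THRESHOLD,
    }
```

Note that the small-platform question in the FAQ below turns on exactly this asymmetry: only one key in the dictionary depends on user count.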
What The Rules Actually Require
Labels must be "prominent, visible, and non-removable" across AI-generated text, images, audio, and video. Routine edits like cropping or colour correction are exempt unless they distort meaning. Platforms must require user self-declaration, run technical verification above the 5-million-user threshold, and keep logs for accountability. Terms of service must now remind users of these rules every three months rather than annually.
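The 10% area floor translates directly into pixel budgets. A minimal sketch of the arithmetic, using integer math to avoid float rounding at the boundary; the full-width-banner layout is an illustration, not something the rules mandate:

```python
MIN_LABEL_PERCENT = 10  # % of image area, per the enforcement guidance

def min_banner_height(img_height: int) -> int:
    """Minimum height in pixels of a full-width label banner that
    covers at least 10% of the image area (ceiling division)."""
    return -(-img_height * MIN_LABEL_PERCENT // 100)

def label_is_compliant(img_w: int, img_h: int,
                       label_w: int, label_h: int) -> bool:
    """True if a rectangular label meets the 10% area floor.
    Cross-multiplied integers keep the boundary case exact."""
    return label_w * label_h * 100 >= img_w * img_h * MIN_LABEL_PERCENT
```

For a 1080x1920 portrait video frame, the banner floor works out to 192 pixels of height; anything shorter fails the area test unless it is wider than the frame allows.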
The rules explicitly name deepfake pornography as priority unlawful content, with enforcement coordinated between MeitY, the Cyber Crime Coordination Centre, and state police. Non-compliance does not automatically strip safe-harbour protection under Section 79 of the IT Act, but platforms that publish unlabelled synthetic content or miss takedown windows open themselves to liability.
The Asian Compliance Gap
Here is the awkward truth for Asian platform teams: most do not have a content-provenance stack ready to ship. Watermarking is fine for first-party generated content, but the rules require labelling of uploaded content too, which means detection models, metadata propagation, and a user flow that neither creators nor uploaders are used to. C2PA-compatible metadata is the cleanest path, but adoption across Indian and Southeast Asian tooling remains patchy.
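One cheap first step, short of full C2PA manifests, is a server-side provenance record keyed on the content hash rather than on file headers, so it survives client-side metadata stripping. This is a sketch under stated assumptions: the record fields and function names are hypothetical, and an exact hash will not survive re-encoding, which is why the checklist below calls for labelling that survives re-uploads (perceptual hashing or embedded manifests in practice).

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Minimal sidecar provenance entry (hypothetical schema).
    An embedded C2PA manifest is the stronger option; this record
    at least cannot be stripped by editing the file's own metadata."""
    content_sha256: str
    ai_generated: bool   # from the user's self-declaration
    declared_by: str     # uploader account id
    recorded_at: str     # ISO-8601 UTC timestamp

def record_upload(payload: bytes, ai_generated: bool,
                  uploader: str) -> ProvenanceRecord:
    return ProvenanceRecord(
        content_sha256=hashlib.sha256(payload).hexdigest(),
        ai_generated=ai_generated,
        declared_by=uploader,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

def matches(record: ProvenanceRecord, payload: bytes) -> bool:
    """A re-uploaded file maps back to this record only if the
    bytes are unmodified -- re-encodes defeat an exact hash."""
    return hashlib.sha256(payload).hexdigest() == record.content_sha256
```

The limitation in `matches` is the whole compliance gap in miniature: exact hashes are easy, but the rules effectively demand labelling that survives transformation.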
We thought we had another year. We do not. The Indian rules move the centre of gravity for content provenance from the G7 to Asia, and it is not going back.
The 3-hour takedown window is aggressive by any global standard. Platforms that have not invested in automated content triage are going to miss it.
What Regional Teams Should Do This Week
- Audit every surface where AI-generated content can reach Indian users, including embeds, cross-posts, and federated content.
- Stand up a labelling pipeline that survives re-uploads and covers the 10% image-area floor.
- Add the 3-hour complaint-to-takedown workflow, with escalation runbooks for weekends and public holidays.
- Make user declarations a first-class object in your upload API, not a checkbox.
- Publish your updated terms of service and schedule the mandatory three-monthly user reminder.
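The complaint-to-takedown workflow in the checklist can be sketched as a deadline calculation. The SLA values come from the rules as summarised above; the `Complaint` class and its field names are illustrative, not a reference implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

TAKEDOWN_SLA = timedelta(hours=3)   # flagged deepfakes / unlawful AI content
URGENT_SLA = timedelta(hours=12)    # urgent grievances
GENERAL_SLA = timedelta(days=7)     # general grievances

@dataclass
class Complaint:
    received_at: datetime  # use timezone-aware UTC timestamps in practice
    kind: str              # "deepfake", "urgent", or "general"

    @property
    def deadline(self) -> datetime:
        sla = {"deepfake": TAKEDOWN_SLA, "urgent": URGENT_SLA}.get(
            self.kind, GENERAL_SLA)
        return self.received_at + sla

    def is_breached(self, now: datetime) -> bool:
        # The clock runs in wall time: no pause for weekends or public
        # holidays, which is exactly why the escalation runbooks above
        # need out-of-hours coverage.
        return now > self.deadline
```

A deepfake complaint landing at 22:00 on a Saturday is due by 01:00 Sunday; the runbook, not the code, is what makes that deadline survivable.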
| Requirement | Old rule (2021) | New rule (2026) |
|---|---|---|
| AI-content labelling | Not required | Mandatory, visible, non-removable |
| Takedown timeline | 36 hours | 3 hours |
| Grievance response | 15 days | 7 days |
| User AI self-declaration | Not required | Mandatory for upload |
| Re-notification of ToS | Annually | Every 3 months |
The rules sit alongside India's broader sovereign AI push, which we covered in our analysis of India's GPU subsidy and sovereign AI plan. Read together, they point to a country that wants to be a compute hub and a content regulator at the same time. That is an unusual combination, and one that Southeast Asian platforms will find themselves mirroring whether they like it or not. For the broader picture on regional governance, see our write-up of Korea's AI Basic Act enforcement and Vietnam's phased AI law.
Frequently Asked Questions
Which Asian platforms are affected by India's 2026 IT Rules?
Every intermediary with Indian users, regardless of where it is headquartered. That includes Chinese, Korean, Japanese, and Southeast Asian social platforms, creator tools, and any AI product that lets Indian users upload or generate content for public viewing.
What happens if a platform misses the 3-hour takedown window?
The intermediary risks losing safe-harbour protection under Section 79 of the IT Act for that specific content, and could be pulled into downstream civil or criminal liability. Repeat misses can trigger directions from MeitY, including temporary suspension of specific services.
Is there a de minimis exemption for small Asian platforms?
The 5-million-user threshold only relaxes the automated-verification obligation. Labelling, takedown, and logging requirements apply to all intermediaries that operate in India, regardless of user count.
Can existing C2PA or provenance metadata satisfy the rule?
The rules do not mandate C2PA specifically, but a C2PA-compliant provenance manifest is the cleanest path to meet the non-removable metadata requirement. Platforms should expect MeitY guidance on accepted technical standards to tighten during 2026.
This is the first major Asian content regime to put AI labelling enforcement on a 3-hour clock. Will your platform be ready when Indonesia or Malaysia copies it? Drop your take in the comments below.