The Labelling Wars: Asia Is Leading the Global Race to Tag Every Piece of AI Content
Every deepfake, every AI-generated news anchor, every synthetic voice clone now carries a target on its back. Across Asia, governments are rolling out the most ambitious content labelling mandates the world has ever seen, requiring that artificial intelligence identify itself before it can speak, write, or show its face. China’s AI labelling rules took effect in September 2025. South Korea’s AI Basic Act followed in January 2026. India joined the fray in February 2026 with three-hour takedown deadlines. Vietnam’s AI Law kicked in on 1 March 2026. And the European Union, the self-proclaimed global standard-setter, will not enforce its own transparency rules until August 2026. Asia is not waiting for the West to set the rules on synthetic media. It is writing them.
By The Numbers
- 1,530%: Increase in deepfake fraud across Asia-Pacific from 2022 to 2023, the fastest growth of any region globally (Sumsub Identity Fraud Report, 2024)
- 1,625%: South Korea’s deepfake fraud growth rate in the same period, the highest of any single country worldwide (Sumsub, 2024)
- $869 million: Projected size of Asia-Pacific’s deepfake detection market by 2031, up from $54.8 million in 2023 (The Insight Partners)
- 6,000+: Members of the Content Authenticity Initiative, the global coalition driving adoption of C2PA provenance standards (Content Authenticity Initiative, 2026)
- 3 hours: Maximum window for Indian platforms to remove unlawful AI-generated content under the IT Rules Amendment 2026 before losing safe harbour protection
- 30+: Enterprises in China’s AI-Generated Content Labeling Ecosystem Alliance, formed for cross-platform label verification (Shanghai CAC, 2025)
China Sets the Pace With Mandatory Dual Labelling
China was the first major economy to enforce comprehensive AI content labelling at scale. The Measures for Labeling Artificial Intelligence-Generated Content, released by the Cyberspace Administration of China (CAC) on 14 March 2025 and effective from 1 September 2025, require two distinct layers of identification on every piece of AI-generated content that could mislead the public.
The first layer is explicit: visible text, audio cues, or graphic overlays that ordinary users can immediately recognise. A chatbot response, a face-swapped video, a voice clone, or a synthetic image must carry a clear marker such as “AI-generated” in Chinese characters. The second layer is implicit: embedded metadata containing the provider’s name, a unique content identifier, and encrypted watermarks that survive compression, cropping, and redistribution.
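To make the dual-layer requirement concrete, here is a minimal, hypothetical sketch of a provider-side labelling step in Python. The field names, marker text, and payload format are illustrative assumptions, not the official schema defined in GB 45438-2025.

```python
# Illustrative sketch only: field names and marker text are hypothetical,
# not the official GB 45438-2025 schema. It shows the two layers the
# Measures require: an explicit, user-visible marker and an implicit
# metadata payload carrying provider identity and a content identifier.
import json
import uuid
from dataclasses import dataclass, asdict

@dataclass
class ImplicitLabel:
    provider_name: str           # the AI service provider's registered name
    content_id: str              # unique identifier for this piece of content
    ai_generated: bool = True

def build_labels(provider_name: str,
                 visible_text: str = "AI生成 / AI-generated") -> tuple[str, str]:
    """Return the visible marker and the metadata payload to embed in the file."""
    implicit = ImplicitLabel(provider_name=provider_name,
                             content_id=uuid.uuid4().hex)
    return visible_text, json.dumps(asdict(implicit), ensure_ascii=False)

marker, metadata = build_labels("ExampleAI Co.")
print(marker)    # rendered as an overlay, caption, or audio cue
print(metadata)  # embedded in the file's metadata alongside a watermark
```

In practice the implicit layer would also include an encrypted watermark designed to survive compression and cropping, which this sketch omits.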
Platforms bear a heavy burden under these rules. They must detect incoming content, categorise it into three tiers (confirmed, possible, or suspected AI-generated), and reinforce or add labels accordingly. The CAC’s 2025 “Qinglang” enforcement campaign has already targeted unlabelled deepfakes and synthetic misinformation. By March 2026, regulators had publicly urged short-form video platforms to standardise their labelling practices after finding significant inconsistencies in how different apps handled AI-generated clips.
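A hedged sketch of that three-tier triage might look like the following; the signal names and threshold are assumptions for illustration, since real platforms combine watermark checks, metadata parsing, uploader declarations, and machine-learning classifiers.

```python
# Hypothetical triage logic for the three tiers described above. The
# threshold value is an assumption, not anything specified by the CAC.
def categorise(has_implicit_metadata: bool,
               declared_by_uploader: bool,
               detector_score: float) -> str:
    """Map detection signals to a labelling tier for incoming content."""
    if has_implicit_metadata:
        return "confirmed"       # provider metadata found in the file
    if declared_by_uploader:
        return "possible"        # uploader declared AI use, no metadata found
    if detector_score >= 0.5:    # hypothetical classifier threshold
        return "suspected"       # platform's own detector flagged the content
    return "unlabelled"

print(categorise(False, False, 0.83))  # -> "suspected"
```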
“China’s labelling regime is the most operationally detailed in the world. It doesn’t just tell platforms to label content; it tells them how to detect it, how to categorise it, and how to verify labels across platforms.” — Rogier Creemers, Research Fellow, Leiden University
To support cross-platform verification, the Shanghai CAC established the AI-Generated Content Labeling Ecosystem Alliance in late 2025, bringing together more than 30 enterprises to develop shared detection protocols and mutual label recognition. China has also published a national technical standard, GB 45438-2025, specifying labelling methods for different content formats.
South Korea and India Join the Regulatory Sprint
If China wrote the first chapter, South Korea and India are writing the second and third at speed.
South Korea’s Framework Act on Artificial Intelligence, commonly known as the AI Basic Act, took effect on 22 January 2026. Article 31 requires that synthetic sounds, images, or videos that are “indistinguishable from reality” carry clear labels identifying them as AI-generated. The law draws a practical distinction: clearly artificial outputs such as cartoons or stylised artwork need only carry invisible digital watermarks, while photorealistic deepfakes must display visible labels. South Korea’s advertising sector faces additional obligations. From early 2026, all AI-generated or AI-assisted advertisements must be labelled, with portal and platform operators required to provide labelling tools and notify content providers of their obligations.
“South Korea’s approach is notable because it ties labelling obligations to the degree of realism. A cartoon avatar and a photorealistic deepfake face different rules, which makes the system proportionate and easier for creators to navigate.” — Dr Seongcheol Kim, Professor of Communication, Korea University
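As a rough illustration of that realism-based routing, the sketch below assumes a numeric realism score, which is an invention for this example; Article 31 itself speaks of content “indistinguishable from reality” rather than any threshold.

```python
# Hedged sketch of South Korea's tiered labelling. The realism score and
# the 0.7 threshold are hypothetical stand-ins for an "indistinguishable
# from reality" judgement.
def korean_label_mode(is_ai_generated: bool, realism_score: float) -> str:
    """Return which labelling obligation applies under the tiered scheme."""
    if not is_ai_generated:
        return "none"
    if realism_score >= 0.7:         # photorealistic deepfake territory
        return "visible_label"       # on-screen disclosure required
    return "invisible_watermark"     # cartoons and stylised outputs

print(korean_label_mode(True, 0.9))  # -> "visible_label"
print(korean_label_mode(True, 0.2))  # -> "invisible_watermark"
```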
India moved even faster, and with sharper enforcement teeth. The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules Amendment, notified on 10 February 2026 and effective from 20 February, introduces the concept of “Synthetically Generated Information” (SGI). Platforms must implement reasonable technical and organisational measures to detect deepfakes, apply AI content labels, and deploy provenance technologies that help users distinguish synthetic from authentic media.
The penalties for non-compliance are sharp. Non-consensual intimate deepfake imagery must be removed within two hours. Other unlawful AI-generated content, including misinformation, impersonation, and forged documents, must come down within three hours. Miss the deadline and a platform loses its safe harbour protection, exposing it to direct legal liability.
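The deadline arithmetic itself is straightforward, as the sketch below shows; the category keys are hypothetical shorthand for the classes of content the amendment describes, and the real obligation turns on when the platform is put on notice.

```python
# Illustrative deadline tracking under the IT Rules Amendment 2026 as
# summarised above: 2 hours for non-consensual intimate imagery, 3 hours
# for other unlawful synthetically generated information. Category keys
# are hypothetical shorthand, not terms from the rules.
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOWS = {
    "ncii": timedelta(hours=2),          # non-consensual intimate imagery
    "unlawful_sgi": timedelta(hours=3),  # misinformation, impersonation, forgery
}

def takedown_deadline(category: str, notified_at: datetime) -> datetime:
    """Return the time by which content must be removed to keep safe harbour."""
    return notified_at + TAKEDOWN_WINDOWS[category]

notice = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline("unlawful_sgi", notice))  # 2026-02-20 12:00:00+00:00
```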
Vietnam, the EU and the Fragmentation Problem
Vietnam became the first Southeast Asian nation to enforce a standalone AI law when its Law on Artificial Intelligence took effect on 1 March 2026. The law mandates content labelling for generative AI under a risk-based model, with compliance grace periods extending to September 2027 for legacy systems in health, education, and finance. It signals a broader shift across ASEAN, where the Philippines intends to push an AI regulatory framework during its 2026 chairmanship and Indonesia’s AI presidential regulations are expected by mid-year.
The European Union, often positioned as the global pacesetter on tech regulation, is arriving late to the labelling party. The EU AI Act’s Article 50 transparency obligations, which require that AI-generated synthetic content be marked in a machine-readable format and that deepfakes be labelled for users, will not become enforceable until August 2026. A Code of Practice on marking and labelling AI-generated content, expected to be finalised in May or June 2026, proposes a multilayered approach combining metadata embedding, imperceptible watermarks, and a common “EU icon” that citizens can recognise at a glance.
The technical backbone for much of this global effort is the Coalition for Content Provenance and Authenticity (C2PA) standard. C2PA provides an open specification for attaching provenance information to digital files, recording whether content was created by AI, edited, or captured by a camera. The standard is designed to be interoperable and agnostic to the specific watermarking technology used. Its consumer-facing implementation, Content Credentials, has been adopted by major platforms and is increasingly referenced in national regulations. China’s GB 45438-2025 standard, for instance, aligns with C2PA’s metadata principles while adding China-specific requirements for provider identification.
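To give a flavour of what such a provenance record carries, the sketch below assembles a loosely C2PA-inspired manifest as plain JSON. This is not the real C2PA serialisation, which is a cryptographically signed structure embedded in the asset itself; the field names are simplified for illustration.

```python
# A loose, simplified illustration of a provenance manifest. Real Content
# Credentials are signed and embedded in the file; this sketch only shows
# the kind of information such a record binds to an asset.
import hashlib
import json

def simple_provenance_manifest(file_bytes: bytes, generator: str, used_ai: bool) -> str:
    manifest = {
        "claim_generator": generator,                     # tool that produced the asset
        "assertions": [
            {"label": "ai_generated", "value": used_ai},  # was AI used at creation?
        ],
        "content_hash": hashlib.sha256(file_bytes).hexdigest(),  # ties manifest to file
    }
    return json.dumps(manifest, indent=2)

print(simple_provenance_manifest(b"example image bytes", "ExampleGen 1.0", True))
```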
| Country | Law / Rule | Effective Date | Labelling Approach | Key Enforcement Mechanism |
|---|---|---|---|---|
| China | Measures for Labeling AI-Generated Content | 1 Sep 2025 | Dual: explicit (visible) + implicit (metadata/watermark) | Qinglang campaigns, licence revocation |
| South Korea | AI Basic Act, Article 31 | 22 Jan 2026 | Tiered: visible labels for realistic content, invisible watermarks for stylised | KCC guidelines, advertising compliance mandates |
| India | IT Rules Amendment 2026 | 20 Feb 2026 | Mandatory SGI detection + provenance tech | 2 to 3 hour takedown or loss of safe harbour |
| Vietnam | Law on Artificial Intelligence | 1 Mar 2026 | Risk-based labelling for generative AI | Grace periods to Sep 2027 for legacy systems |
| EU | AI Act, Article 50 | Aug 2026 | Multilayered: metadata, watermarks, common icon | Code of Practice (voluntary pre-Aug 2026) |
The speed of Asia’s regulatory response has created an uncomfortable reality for global technology companies: there is no single standard for AI content labelling, and compliance in one jurisdiction does not guarantee compliance in another.
China’s dual-layer system demands both visible markers and embedded metadata with specific provider identifiers. South Korea’s tiered approach requires different labelling for different levels of realism. India’s framework focuses on speed, with takedown deadlines that demand real-time detection capabilities. Vietnam’s risk-based model introduces grace periods that create a compliance patchwork within a single country. And the EU’s multilayered approach, still being finalised, adds yet another set of technical specifications.
For multinational platforms serving users across Asia and Europe, the result is a fragmentation headache. A piece of synthetic content posted in Singapore might need to comply with India’s SGI rules if it reaches Indian users, China’s explicit labelling requirements if it surfaces on Chinese platforms, and South Korea’s tiered system if it appears in Korean advertising. The compliance costs are already running into billions across the region.
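In practice, a compliance team might start by mapping a single asset against each target market’s headline obligations, along the lines of the simplified sketch below; the rule strings paraphrase the comparison table above and are illustrative, not legal advice.

```python
# Simplified illustration of the jurisdictional patchwork described above.
# The requirement strings paraphrase the comparison table and are not a
# complete or authoritative statement of any law.
REQUIREMENTS = {
    "CN": ["visible marker", "embedded metadata with provider ID"],
    "KR": ["visible label if photorealistic", "invisible watermark if stylised"],
    "IN": ["SGI detection and labelling", "2-3 hour takedown on notice"],
    "VN": ["risk-based labelling for generative AI"],
    "EU": ["machine-readable marking", "deepfake disclosure (from Aug 2026)"],
}

def compliance_checklist(target_markets: list[str]) -> dict[str, list[str]]:
    """Return the labelling obligations triggered in each target market."""
    return {m: REQUIREMENTS[m] for m in target_markets if m in REQUIREMENTS}

for market, duties in compliance_checklist(["CN", "IN", "EU"]).items():
    print(market, "->", "; ".join(duties))
```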
The C2PA standard offers a partial solution. By embedding provenance information at the point of creation, it provides a universal metadata layer that different national systems can read and interpret according to their own rules. But C2PA does not solve the visible labelling problem: a watermark that satisfies China’s explicit label requirement may not match South Korea’s format for deepfake disclosures or the EU’s proposed common icon.
“The technology for content provenance is maturing fast, but the policy layer is fragmenting just as quickly. We risk building a world where every country can read a content credential but interprets it differently.” — Andrew Jenks, Chair, C2PA Steering Committee
Detection remains another weak link. Human accuracy in identifying high-quality video deepfakes hovers at just 24.5%, and defensive AI detection tools see their effectiveness drop by 45% to 50% when tested against real-world deepfakes outside controlled laboratory conditions (Bright Defense, 2026). Until detection catches up, labelling regimes will depend heavily on upstream compliance by AI providers rather than downstream policing by platforms.
The AIinASIA View: Asia’s AI content labelling push is the most consequential regulatory development in synthetic media since the invention of Photoshop. China, South Korea, India, and Vietnam have collectively created a labelling infrastructure that covers more than 4 billion people, months before the EU’s rules even take effect. But speed without coordination risks fragmentation. The C2PA standard is the closest thing the world has to a universal provenance layer, yet no country has fully adopted it as its sole framework. What businesses need now is not more regulation but interoperability: mutual recognition agreements that let a label applied in Seoul satisfy a regulator in Delhi, Beijing, or Brussels. The alternative is a compliance maze that punishes the platforms trying hardest to be transparent.
Frequently Asked Questions
What is AI content labelling and why does it matter?
AI content labelling is the practice of marking text, images, audio, or video as having been generated or substantially altered by artificial intelligence. It matters because the volume of synthetic media is growing exponentially, deepfake fraud across Asia-Pacific surged 1,530% in recent years, and without clear labels, the public cannot distinguish real from fabricated content.
Which Asian country has the strictest AI labelling rules?
China currently has the most operationally detailed regime, requiring both visible labels and embedded metadata with provider identification. India, however, has the sharpest enforcement teeth, with platforms risking loss of safe harbour protection if they fail to remove unlawful AI content within two to three hours.
How does the C2PA standard help with AI content labelling?
The Coalition for Content Provenance and Authenticity provides an open technical specification for embedding provenance information into digital files at the point of creation. This metadata records whether content was made by AI, edited, or captured by a camera, giving platforms and regulators a verifiable trail without relying solely on visible markers.
Will AI content labelling stop deepfakes?
Labelling alone will not eliminate deepfakes, but it raises the cost of deception. When platforms can detect and flag synthetic content through embedded metadata and watermarks, bad actors lose the advantage of anonymity. The bigger challenge is detection accuracy: current tools lose up to 50% effectiveness against real-world deepfakes, which means enforcement still depends heavily on creators and platforms complying voluntarily.
How do these rules affect businesses operating across Asia?
Companies serving multiple Asian markets face a compliance patchwork. A single piece of AI-generated content may need to meet China’s dual-layer requirements, South Korea’s tiered labelling, and India’s rapid takedown deadlines simultaneously. Adopting the C2PA standard at the point of content creation offers the best current strategy for building a compliance baseline, but visible labelling requirements still vary by jurisdiction.
Where the Labels Lead
Asia’s content labelling push is not a sideshow to the bigger AI regulation story. It is the story. While debates about foundation model safety and frontier AI risk command headlines, the labelling mandates taking effect across the region will shape how billions of people interact with AI-generated content every day. The technical standards exist. The legal frameworks are live. What remains is the hardest part: making them work together across borders so that transparency does not become a luxury reserved for users in whichever jurisdiction got its rules right first.
Drop your take in the comments below.