Thailand's AI Act Is Now Live, And The Compliance Window For High-Risk Systems Just Opened
Thailand's AI Act came into force on 1 March 2026, which means every foreign AI provider selling into the Thai market now needs a local representative, a risk classification, and a plan for the high-risk AI rules that are about to land. The Draft Royal Decree and Draft Prime Ministerial Notification published in February 2026 turn the law's framework into operational requirements, and the consultation period is the last moment to influence how tightly they bite. If your product team has been waiting for the policy to settle before shipping into Bangkok, the wait is over.
What The AI Act Actually Does
Thailand chose a risk-based model broadly aligned with the EU AI Act, as analysts at Baker McKenzie and Tilleke & Gibbins have documented. Unacceptable-risk AI systems, including those judged to threaten human rights, are outright prohibited. High-risk AI, which covers areas like justice, credit decisioning, hiring, and healthcare, requires registration, conformity assessment, risk management aligned with ISO/IEC 42001:2023, and serious-incident reporting. Limited-risk AI triggers transparency obligations, and minimal-risk AI is largely untouched.
The Electronic Transactions Development Agency, better known as ETDA, now operates an AI Governance Center, which acts as the central compliance desk. Foreign providers cannot sell high-risk AI into Thailand without appointing a local representative to handle registration, incident reports, and enforcement correspondence.
By The Numbers
- 1 March 2026: Thailand's AI Act effective date, making it the first ASEAN AI statute in force, per the Enersys briefing
- February 2026: Draft Royal Decree and Draft Prime Ministerial Notification on high-risk AI published for public consultation
- 4 rights for Thai individuals: the right to know, right to explanation, right to human oversight, and right to contest AI decisions
- 3 risk tiers codified in the statute: unacceptable, high, and limited, with a minimal-risk residual category
- 1 national AI registry planned under ETDA's AI Governance Center as the default compliance entry point
The High-Risk Rules Are Where The Work Sits
The statute is the easy part. The operational detail is in the Draft Royal Decree on high-risk AI, which the Formiti compliance guide summarises in more usable form. Providers of high-risk AI must register each system with ETDA, document their risk management processes against the ISO/IEC 42001:2023 standard, report serious incidents within defined windows, and keep technical documentation available for inspection.
Personal data protection rules tighten the screw further. The Personal Data Protection Committee, or PDPC, opened consultation in February 2026 on draft guidelines that mandate Data Protection Impact Assessments for any high-risk AI that processes personal data, and explicitly prohibit training AI models on personal data without a legal basis. The net effect is that a Thai AI deployment now pulls in three regimes simultaneously: the AI Act itself, the Personal Data Protection Act, and sector-specific rules in finance, health, and public administration.
The AI Act creates a binding, cross-sector compliance layer in Thailand, and foreign providers cannot rely on Thai partners to carry the full regulatory load any longer.
The high-risk rules are the compliance pressure point, and the consultation window closing in 2026 will determine whether they become the ASEAN template.
Where Thailand Sits Against The Rest Of Asia
The Thai approach is neither the toughest in Asia nor the softest. The table below lines up the region's live and draft AI regimes as product teams should read them in April 2026.
| Jurisdiction | Status | Core model | Foreign provider obligations |
|---|---|---|---|
| Thailand AI Act | In force 1 March 2026 | Risk-based, EU-style | Local representative, registration, ISO/IEC 42001 alignment |
| South Korea AI Framework Act | In force, enforcement ramping | Risk-based, watermarking | Watermark obligations, content transparency |
| Japan AI Promotion Act | In force, soft-touch | Innovation-first, voluntary | Sectoral guidance, narrow duties |
| India AI labelling rules | In force since April 2026 | Disclosure and provenance | Labelling, deepfake disclosure |
| Singapore AI Verify | Voluntary assurance toolkit | Technical assurance | Evidence-based testing |
| Vietnam draft AI law | Consultation phase | Risk-based, sovereignty-flavoured | Possible local model mandates |
The practical insight is that Thailand is now the jurisdiction in the ASEAN bloc most likely to issue formal enforcement actions against high-risk AI within the next 12 months. Singapore is exporting assurance frameworks rather than statutes, as we covered this week.
India's labelling regime is live but sits on top of the IT Rules rather than a standalone AI law. Japan continues to prefer innovation-first soft law, and Korea is pushing content provenance over system-level obligations, leaving Thailand as the most immediate operational risk for foreign AI vendors.
What Asian Product Teams Should Do This Month
Thailand is unusual in combining a relatively light touch on generative AI chatbots with a heavy touch on anything that makes a consequential decision about a person. Hiring platforms, credit engines, medical triage tools, insurance underwriting systems, and judicial support tools are all squarely in scope. Retail recommendation engines, marketing copy generators, and internal productivity assistants are not.
Three actions are due now. First, appoint a Thailand-based representative entity that can handle registration and incident correspondence. Second, map every customer-facing AI feature to the three-tier risk classification and document the reasoning, because ETDA's first enforcement waves will target undocumented high-risk deployments, per the Baker McKenzie public PDF. Third, align the risk management programme to ISO/IEC 42001:2023, because that will be the default evidence standard for conformity assessments.
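The second action, mapping each feature to a risk tier and documenting the reasoning, can be operationalised as a simple internal triage script. The sketch below is illustrative only: the domain lists are assumptions drawn from the Draft Royal Decree summaries above, not an official ETDA taxonomy, and any real classification should be reviewed by Thai counsel.

```python
# Illustrative triage helper for documenting provisional risk-tier decisions.
# Domain lists are assumptions based on public summaries of the Draft Royal
# Decree, not an official taxonomy from ETDA.

HIGH_RISK_DOMAINS = {
    "justice", "credit", "hiring", "healthcare",
    "education", "critical_infrastructure",
}
LIMITED_RISK_DOMAINS = {"chatbot", "content_generation"}

def classify(feature_domain: str, affects_individual_rights: bool) -> str:
    """Return a provisional risk tier for internal documentation."""
    # Any feature whose decisions materially affect individual rights is
    # treated as high-risk, regardless of domain label.
    if feature_domain in HIGH_RISK_DOMAINS or affects_individual_rights:
        return "high"
    if feature_domain in LIMITED_RISK_DOMAINS:
        return "limited"
    return "minimal"

# A credit-decisioning engine lands in the high-risk tier; a marketing
# chatbot triggers only transparency duties; recommendations fall through.
print(classify("credit", affects_individual_rights=True))            # high
print(classify("chatbot", affects_individual_rights=False))          # limited
print(classify("recommendations", affects_individual_rights=False))  # minimal
```

The point of a script like this is not the classification itself, which a lawyer must confirm, but the audit trail: each feature gets a recorded tier and a recorded reason before ETDA ever asks.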
Frequently Asked Questions
Do foreign AI providers really need a Thai representative?
Yes, once the product touches high-risk use cases. The AI Act requires a local representative for registration, incident reporting, and enforcement correspondence, and ETDA has signalled it will not accept cross-border proxies for these duties.
What counts as a high-risk AI system under the Thai Act?
The Draft Royal Decree lists applications in justice, credit decisioning, hiring, healthcare, education, and critical infrastructure. Systems whose decisions materially affect individual rights or access to services fall into this category, and ISO/IEC 42001:2023-aligned risk management is the expected baseline.
How does the Thai AI Act interact with the existing PDPA?
The two regimes run in parallel. High-risk AI systems that process personal data trigger both the AI Act's registration and risk management obligations and the PDPA's Data Protection Impact Assessment and lawful basis requirements. The PDPC's February 2026 guidelines tie the two together.
Is there still time to shape the high-risk rules?
Partly. The consultation window on the Draft Royal Decree and Draft Prime Ministerial Notification is the formal channel for input, and legal firms with Bangkok presence are coordinating industry submissions. Expect final versions to take effect in the second half of 2026.
Will other ASEAN countries follow Thailand's lead?
Indirectly, yes. Vietnam's draft AI law borrows structurally, and Malaysia and Indonesia are watching Thai enforcement outcomes before committing to their own binding frameworks. Thailand's willingness to enforce early will be the variable that decides whether the ASEAN cluster lands on a risk-based model or retreats to sectoral rules.
Thailand just put Asian AI policy into operational mode, and the compliance clock is running. Are you already registered, or are you still negotiating with your legal team? Drop your take in the comments below.