AI in Asia
Vietnam, Korea Shape Asia's AI Regulatory Future
Policy


Vietnam's AI Law and Korea's Framework Act set standards. Two regulatory models shaping APAC compliance through 2027 and beyond.

Updated Apr 27, 2026 · 9 min read

Vietnam and South Korea Are Writing the Rulebook for Asian AI Regulation

The Asia-Pacific region is no longer debating whether to regulate AI. It is now enforcing regulation. Vietnam became the first Southeast Asian nation to implement a comprehensive standalone AI law on March 1, 2026. South Korea's Framework Act on Artificial Intelligence entered into force on January 22, 2026. Together, these two jurisdictions are defining the governance model that will shape how every Asian enterprise, government, and startup manages AI in the next decade. Neither is perfect, but both signal the end of the era of experimentation.

The stakes are enormous. Enterprise AI deployment across APAC hinges on regulatory clarity. Multinationals operating across Singapore, Tokyo, Seoul, Bangkok, and Ho Chi Minh City are now navigating overlapping compliance regimes. The question is not whether Asia will regulate AI; it is whether that regulation will fragment into 20 incompatible frameworks or converge into a loosely aligned regional standard. Vietnam and South Korea are setting the tone.


Vietnam's Risk-Based First Mover Advantage

Vietnam's Law on Artificial Intelligence was passed on December 10, 2025, and took effect on March 1, 2026. It is the first comprehensive, legally binding AI framework in Southeast Asia. The law adopts a risk-based classification system: high-risk AI systems (those used in finance, healthcare, education, and law enforcement) face mandatory impact assessments, human oversight requirements, and regular audits. Medium-risk systems require transparency and documentation. Low-risk systems face minimal restrictions.
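As a rough engineering illustration rather than legal guidance, a compliance team might triage its deployments into the law's three tiers. The high-risk sector list below follows the article; the medium/low boundary (user-facing versus internal) is an invented assumption for the sketch, not a reading of the statute:

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # impact assessments, human oversight, regular audits
    MEDIUM = "medium"  # transparency and documentation
    LOW = "low"        # minimal restrictions

# Sectors the law treats as high-risk, per the article.
HIGH_RISK_SECTORS = {"finance", "healthcare", "education", "law enforcement"}

def classify(sector: str, user_facing: bool) -> RiskTier:
    """Toy triage of one deployment into Vietnam's three risk tiers."""
    if sector.lower() in HIGH_RISK_SECTORS:
        return RiskTier.HIGH
    # Invented boundary: user-facing systems carry transparency duties.
    return RiskTier.MEDIUM if user_facing else RiskTier.LOW
```

Under this sketch a lending chatbot (`classify("finance", True)`) lands in the high tier regardless of how it is surfaced, while an internal logistics optimiser falls to the low tier.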

The genius of Vietnam's approach is pragmatism. Rather than banning high-risk applications outright, the law permits them under rigorous conditions. For generative AI, it mandates human oversight, clear labelling of AI-generated content, bans on automated deepfakes used for fraud or disinformation, and state oversight of how algorithms are deployed in public services. Yet it provides grace periods: legacy systems in healthcare and education have until September 2027 to comply, and financial institutions have until March 2027. This gradualism signals to businesses that compliance is expected but not punitive if approached in good faith.

Enforcement starts immediately, however. Vietnamese regulators have appointed an AI Governance Council responsible for monitoring compliance, issuing guidance, and investigating complaints. Large technology companies operating in Vietnam, including Chinese platforms, domestic fintechs, and multinational cloud providers, must now ensure that their AI systems meet the law's requirements or face suspension of services.

For Southeast Asia, Vietnam's move is transformative. It establishes a precedent that comprehensive AI regulation is achievable without descending into heavy-handed restrictions. Indonesia, Thailand, and the Philippines are all watching closely. Initial indications are that Indonesia and Thailand will model their frameworks on Vietnam's risk-based approach rather than pursuing harder bans. If so, Vietnam has just set a regional standard that its neighbours will converge toward, pointing to a convergent ASEAN AI governance model from 2027 onward.

South Korea's Multipart Enforcement Architecture

South Korea's approach is more comprehensive and more prescriptive. The Framework Act on Artificial Intelligence Development and Establishment of Trust, which entered into force on January 22, 2026, creates a multi-layered enforcement structure. We covered the first three months of refinement to Korea's AI Basic Act earlier this month, including its growing extraterritorial reach over multinationals. At the top sits a National AI Committee, chaired by the Prime Minister, responsible for developing a three-year AI development and utilisation plan. Below that sits the AI Safety Research Institute, which will conduct impact assessments and develop sector-specific safety guidelines for high-risk applications.

The act covers AI systems used in critical infrastructure, public administration, and consumer services. It requires foreign AI providers to appoint local representatives within Korea and to disclose their algorithms' key decision factors to users. It mandates transparency in generative AI systems, including labelling of AI-generated content. It imposes penalties for non-compliance ranging from fines to service suspension.

Critically, South Korea's law is extraterritorial. Companies outside Korea that provide AI services to Korean users must comply. This matters because a Silicon Valley chatbot company or a Chinese e-commerce platform operating in Korea must adjust its systems to meet Korean standards or leave the market. Other APAC jurisdictions are now imitating this model.

Enforcement began immediately upon entry into force. The Korean Ministry of Science and ICT and the Ministry of Trade, Industry and Energy have begun issuing compliance guidance for specific sectors: finance, healthcare, employment, education. Companies are expected to conduct AI impact assessments, establish governance boards, and report on their compliance efforts. First-mover firms that engage proactively with regulators are being offered grace periods; those that ignore the law face fines.

The Regional Regulatory Cascade

The impact of Vietnam and South Korea's moves is already visible. Taiwan's AI Basic Act is now in its second quarter of enforcement and is being refined based on Korea's model. Japan's Act on Promotion of AI Research and Korea's AI Basic Act now sit as the two contrasting Asian regulatory models, with Japan favouring soft law and Korea taking the binding route. Australia's AI Safety Institute, which commenced operations in early 2026 with AUS$29.9 million in funding, is modelling its governance review process on Korea's AI impact assessment requirements.

Indonesia, which had expected to release Presidential regulations on AI in early 2026, is now delaying to study the implications of Vietnam's law, the same dynamic we tracked in our analysis of ASEAN AI governance moving beyond the Singapore-Philippines chairmanship. Thailand's ETDA, which was revising its AI principles post-2025 consultation, has widened the consultation to include lessons from Korea and Vietnam. The Philippines, currently holding the ASEAN chair, is proposing a regional AI governance council to converge regulatory frameworks, a model based explicitly on Vietnam's and Korea's approaches.

The pattern is clear: Asia is not converging on a single standard, but rather on a shared philosophy. Regulation must be risk-based, not one-size-fits-all. High-risk systems require human oversight and transparency. Regulators must have clear enforcement mechanisms and authority to impose penalties. And foreign providers must meet local standards or exit the market.

| Jurisdiction | Law / Framework | Entry Into Force | Scope |
| --- | --- | --- | --- |
| South Korea | Framework Act on AI Development and Trust | January 22, 2026 | High-impact systems; generative AI; extraterritorial |
| Vietnam | Law on Artificial Intelligence | March 1, 2026 | Risk-based; high/medium/low-risk classification; grace periods |
| Taiwan | AI Basic Act (with refinements) | October 2025 (ongoing updates) | High-risk systems; transparency; government oversight |
| Japan | Act on Promotion of AI R&D + Guidelines | June 2025 + ongoing sector-specific guidance | Soft-law approach; business guidelines; government investigations |
| Australia | AI Safety Institute (via National AI Plan) | Early 2026; AUS$29.9M funding | Risk assessment; safety evaluation; international collaboration |

What This Means for Global AI Companies and Asian Enterprises

For multinational AI companies (OpenAI, Google, Microsoft, Alibaba, ByteDance), the implications are clear. There is no longer a single "Asia-Pacific" market. Instead, there are five distinct regulatory regimes, each with its own compliance timelines, oversight structures, and penalty mechanisms. A product that works in Singapore may require modification for Korea, redesign for Vietnam, and further adjustments for Japan. Engineering teams must now account for regulatory variance as part of product development.

For Asian enterprises, the good news is clarity. The guesswork is over. Organisations deploying AI systems in finance, healthcare, or government services now have explicit regulatory requirements. The challenge is execution: engineering teams must redesign systems to meet transparency, human-oversight, and auditability requirements. This will slow product development timelines but reduce long-term legal risk.

The critical date is March 2027, when Vietnam's grace period for financial institutions expires. By that date, the region will have a clear picture of how aggressive enforcement is and which companies have prioritised compliance. By September 2027, all remaining legacy systems in Vietnam, including healthcare and education, must be compliant. If enforcement is consistent with South Korea's approach, significant fines and market restrictions are likely for non-compliant firms.

The AIinASIA View: Asia's regulatory moment is here, and it is not what Western observers expected. It is neither a Silicon Valley free-for-all nor China-style authoritarian control, but pragmatic, risk-based governance that respects innovation while demanding accountability. Vietnam and South Korea have set a high bar: transparent algorithms, human oversight for high-risk systems, and enforcement with teeth. We expect convergence across the region by 2027, and we expect multinationals and Asian enterprises to spend the next 12 months in intensive compliance engineering. This is not a problem to delay; it is an opportunity to build trust.

Frequently Asked Questions

What happens to a company that fails to comply with Vietnam's AI law after September 2027?

Non-compliant systems can be suspended or banned from operation in Vietnam. Regulators can impose fines and, in severe cases, restrict the provider's ability to operate in Vietnam. Companies are already reporting that compliance engineering now spans 12-18 months, so the prudent approach is to begin work immediately.

Does South Korea's extraterritorial reach mean foreign companies must comply with Korean law?

Yes, but with a carve-out for de minimis operations. If a company serves Korean users or collects data on Korean residents, it must meet Korean standards or withdraw from the market. However, companies that serve Korea incidentally (e.g., a global LLM used by a tiny fraction of Korean users) have some flexibility in compliance interpretations. The enforcement guidance is still being refined.

Is Vietnam's grace period a sign of weakness or pragmatism?

Pragmatism. Vietnam's regulators recognise that retrofitting legacy systems takes time, and penalising firms for non-compliance during a reasonable transition window would damage investment incentives. The grace period signals: comply in good faith, and you will not face punitive enforcement. Ignore the law after the deadline, and you will face consequences.

How does Australia's AI Safety Institute differ from Vietnam's and Korea's approaches?

Australia is pursuing a lighter-touch model based on voluntary risk assessment and government evaluation, rather than mandatory compliance regimes. However, Australia's framework is converging toward the risk-based model adopted by Korea and Vietnam. Over the next two years, expect Australia to introduce more binding requirements.

Will ASEAN adopt a unified AI governance framework by 2027?

A single unified law is unlikely, but a convergent standard is probable. Vietnam has set the precedent; Indonesia, Thailand, and Malaysia are expected to adopt similar risk-based models. The Philippines, as current ASEAN chair, is proposing a regional governance council to harmonise standards. Full convergence may take until 2028-2029.

Should a multinational prioritise Korea or Vietnam compliance first?

South Korea first. Its enforcement machinery is operational, its penalties are clear, and its extraterritorial reach means non-compliance has immediate consequences. Vietnam's grace periods provide more flexibility, but that window closes in 2027. A two-phase approach (Korea compliance by Q4 2026, Vietnam by Q2 2027) is standard practice.

By The Numbers

March 1, 2026
Vietnam AI Law entry into force

Vietnam became the first Southeast Asian nation to implement a comprehensive standalone AI law, passed December 10, 2025, taking effect March 1, 2026.

January 22, 2026
Korea AI Basic Act enforcement

South Korea's Framework Act on Artificial Intelligence Development and Establishment of Trust entered into force on January 22, 2026, creating a multipart enforcement architecture.

12- to 18-month grace periods
Vietnam compliance deadlines

Vietnam's law grants financial institutions until March 2027 and healthcare/education systems until September 2027 to achieve full compliance with high-risk AI requirements.

AUS$29.9 million
Australia AI Safety Institute funding

Australia's National AI Plan committed AUS$29.9 million to the Australian AI Safety Institute, operational in early 2026, modelling governance reviews on Korea's impact assessments.

5 jurisdictions
Aligned regulatory frameworks

South Korea, Vietnam, Taiwan, Japan, and Australia are converging on risk-based AI governance models with similar transparency and oversight requirements by 2027.

Extraterritorial reach
Korea's enforcement scope

South Korea's Framework Act applies to foreign AI providers serving Korean users, mandating local representation and algorithm disclosure or market exit.
