Three Months In, Korea Is Already Rewriting Its Own AI Rulebook
South Korea's AI Basic Act took effect on January 22, 2026, following the Enforcement Decree's publication on January 21. By April, less than three months into enforcement, the Ministry of Science and ICT (MSIT) was already in active calibration mode, responding to feedback from Korean firms and foreign counsel. That is the most important signal for multinationals: Korea's first comprehensive AI law is not a finished text; it is a live document.
The 10^26 FLOPS Line Is The Most Consequential Detail
Under Article 31 of the Enforcement Decree, AI systems trained above the 10^26 FLOPS compute threshold trigger enhanced obligations: risk assessment, mandatory user protection measures, and heightened supervisory attention from MSIT. Below that threshold, operators face only advance notice duties. That single number has become the dividing line between systems that need a Korea-specific compliance plan and those that do not.
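To see how a provider might triage models against that line, here is a minimal sketch using the common 6 × parameters × tokens approximation for dense-transformer training compute. The approximation, function names, and example model sizes are illustrative assumptions, not anything the Enforcement Decree prescribes.

```python
# Hedged sketch: check whether an estimated training run crosses the
# Enforcement Decree's 10^26 FLOPS line. The 6 * params * tokens rule of
# thumb for dense transformer training is an assumption for illustration.

THRESHOLD_FLOPS = 1e26  # Article 31 Enforcement Decree threshold

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

def is_high_impact(n_params: float, n_tokens: float) -> bool:
    """True if estimated compute exceeds the 10^26 FLOPS threshold."""
    return training_flops(n_params, n_tokens) > THRESHOLD_FLOPS

# A 70B-parameter model on 15T tokens stays well under the line (~6.3e24):
print(is_high_impact(70e9, 15e12))   # False
# A 1T-parameter model on 30T tokens crosses it (~1.8e26):
print(is_high_impact(1e12, 30e12))   # True
```

The point of the sketch is only that the classification turns on one multiplication, which is why the single published number has so much practical weight.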
By The Numbers
- January 22, 2026: AI Basic Act in force, following December 27, 2024 passage and January 21, 2026 Enforcement Decree publication.
- 10^26 FLOPS training compute threshold for classification as high-impact AI.
- KRW 30 million (approximately USD 20,000) maximum administrative fine for violations such as failure to notify users of AI use or failure to appoint a domestic representative.
- 12 high-impact sectors named: energy, drinking water, healthcare, medical devices, nuclear, biometrics, employment, credit evaluation, transportation, public services, student evaluation, plus one additional category.
- November 12, 2025: MSIT notice; public comment period closed December 22, 2025.
- Less than 3 months: time from enforcement start to the first round of clarifying refinements, per Korea Tech Desk reporting.
Extraterritorial Reach Is The Real Teeth
The fines themselves are modest by EU standards. A KRW 30 million cap is not going to reshape big-tech budget planning. But the extraterritorial application of the Act is meaningful, because it applies to AI systems whose outputs affect Korean users, regardless of where the provider is located. That means US and Chinese model providers that target Korean users either directly or through Korean resellers must appoint a domestic representative or face administrative penalties.
Cooley's analysis has been particularly clear on this point. The representative requirement is functionally a gatekeeper, because Korean regulators now have a known individual to serve notices on, pursue corrective orders against, and compel to produce compliance documentation.
"We have aimed at minimum regulations to build public trust while supporting domestic AI competitiveness," in MSIT's own framing of the Act.
The Transparency Obligations That Matter In Practice
Article 31 transparency obligations are where most Korean and foreign businesses are now focusing operational effort. The requirements include advance user notice when interacting with AI, clear labelling of generative AI output, and deepfake labelling. This is the piece of the law that most closely mirrors what the EU and now India are implementing, and it is also the easiest piece to fail quietly without noticing until a complaint lands.
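In engineering terms, those duties reduce to attaching a visible disclosure to every AI output before it reaches a Korean user. The sketch below shows one way to do that; the schema, field names, and Korean label wording are illustrative assumptions, since the Act requires disclosure but does not prescribe exact text.

```python
# Hedged sketch of an output-labelling shim for Article 31-style
# transparency duties. Labels and keys are assumptions for illustration,
# not regulator-approved wording.

GENERATIVE_LABEL_KO = "이 콘텐츠는 AI로 생성되었습니다."  # "This content was generated by AI."
DEEPFAKE_LABEL_KO = "이 영상은 AI로 합성되었습니다."      # "This video was synthesized by AI."

def label_output(content: str, kind: str) -> dict:
    """Attach a Korean-language disclosure label to an AI output.

    kind: "generative" for ordinary generated content, "deepfake" for
    synthetic audio/video depicting real people.
    """
    labels = {"generative": GENERATIVE_LABEL_KO, "deepfake": DEEPFAKE_LABEL_KO}
    if kind not in labels:
        raise ValueError(f"unknown output kind: {kind}")
    return {"content": content, "ai_label_ko": labels[kind], "kind": kind}
```

A shim like this makes the "fail quietly" risk visible: if an output path bypasses the labelling function, that is the gap a complaint will eventually surface.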
What Multinationals Should Do Right Now
Four concrete moves are appropriate for multinationals with any Korean user footprint.
- Appoint a Korean representative. Not appointing one is itself a violation. This is the single lowest-effort compliance step and among the highest-risk failures.
- Classify your models by FLOPS. If you train or significantly fine-tune above 10^26 FLOPS, expect to be treated as a high-impact operator regardless of industry.
- Ship AI labelling. Advance user notice and generative AI output labels should be visible in Korean user interfaces, not just in English-language documentation.
- Build a complaints channel. Korean regulators expect operators to handle user complaints on AI-related matters, and they expect evidence that the channel works.
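On the last point, "evidence that the channel works" in practice means a timestamped record of complaints received and resolved. Here is a minimal sketch of such an audit trail; the statuses and field names are assumptions, not anything Korean regulators have specified.

```python
# Hedged sketch of an AI-complaints channel that keeps a timestamped
# audit trail. Field names and statuses are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Complaint:
    user_id: str
    description: str
    received_at: str       # ISO 8601 UTC timestamp
    status: str = "open"

class ComplaintChannel:
    def __init__(self) -> None:
        self._log: list[Complaint] = []

    def file(self, user_id: str, description: str) -> Complaint:
        """Record a complaint with a UTC timestamp for later evidence."""
        c = Complaint(user_id, description,
                      datetime.now(timezone.utc).isoformat())
        self._log.append(c)
        return c

    def resolve(self, complaint: Complaint) -> None:
        """Mark a complaint handled; the log entry is retained as evidence."""
        complaint.status = "resolved"

    def open_count(self) -> int:
        return sum(1 for c in self._log if c.status == "open")
```

The retained log, not the intake form, is what demonstrates to a regulator that the channel is functioning.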
Comparison To Japan And The EU
Korea's position sits between Japan and the EU on the regulatory spectrum.
- Japan (AI Promotion Act): no fines, no bans, no mandatory labelling; relies on voluntary guidelines.
- Korea (AI Basic Act): modest fines, no bans, mandatory labelling for generative AI.
- EU (AI Act): large fines, prohibited use categories, and a full risk tier framework, with penalties up to EUR 35 million.
For US and Chinese providers, the practical implication is that Korea-grade compliance is now the Asian baseline if you want portability.
The Sectors Most Exposed In Korea
- Healthcare AI (medical devices, triage, diagnostic imaging) under the high-impact list.
- Financial services AI, particularly credit evaluation and fraud detection.
- Employment and HR technology, including candidate screening.
- Education AI, including student evaluation and adaptive learning.
- Public sector AI and transport automation.
The Competitiveness Balance
MSIT's public posture has been deliberate: minimum regulation for domestic competitiveness, serious transparency and user protection, and active calibration. The ministry has been quick to publish clarifications when domestic AI firms flagged practical issues, which is different from the more static EU enforcement model. For Korean firms like Samsung, LG AI Research, Naver, and Kakao, that calibration posture is why the Act's fines have not slowed commercial deployment. For foreign providers, the calibration posture is less helpful, because ambiguous rules tend to tighten over time as enforcement practice develops.
Frequently Asked Questions
What is the AI Basic Act's maximum fine?
The maximum administrative fine under Korea's AI Basic Act is KRW 30 million, approximately USD 20,000, per violation of obligations such as user notification, domestic representative appointment, or compliance with corrective orders.
Does the AI Basic Act apply to foreign AI providers?
Yes. The Act applies extraterritorially to AI systems whose outputs affect Korean users, regardless of where the provider is located. Foreign providers must appoint a domestic representative if they do not have a Korean legal entity.
What is the 10^26 FLOPS threshold?
10^26 FLOPS is the training compute threshold set in the Enforcement Decree. AI systems trained above this level are classified as high-impact and face additional risk assessment and user protection obligations. Systems below the threshold face lighter notification duties.
Is Korea still refining the AI Basic Act?
Yes. Within three months of enforcement, MSIT entered what Korean tech media has described as a calibration phase, producing clarifying guidance based on feedback from domestic and foreign operators. Expect the operational rulebook to keep evolving through 2026.
Closing
Korea's AI Basic Act is already becoming the Asian compliance floor. Has your multinational compliance plan been updated for the 10^26 FLOPS threshold, domestic representative rules, and Article 31 labelling duties? Drop your take in the comments below.