Singapore Writes the World's First Agentic AI Rulebook
AI agents are no longer hypothetical. They book flights, approve invoices, triage patient records, and negotiate supplier contracts. They act autonomously. And until January 2026, no government on earth had published formal rules for how they should behave.
Then Singapore did something unprecedented. On 22 January, Minister Josephine Teo announced the Model AI Governance Framework for Agentic AI at the World Economic Forum in Davos. This made Singapore the first nation to issue formal guidance on governing AI systems that operate independently, building on the country's track record of pioneering AI governance approaches.
The timing matters. As agentic AI systems become mainstream across enterprise deployments, the governance vacuum has left companies scrambling for direction.
Four Pillars of Autonomous AI Accountability
The framework, developed by the Infocomm Media Development Authority (IMDA), rests on four foundational pillars: risk assessment, human oversight, technical controls, and user responsibility. It is voluntary guidance, not legislation, but carries weight because Singapore has consistently turned soft governance into regional norms.
The core principle is unambiguous accountability. When an AI agent books the wrong hotel, sends inappropriate emails, or makes medical recommendations that harm patients, someone must be responsible. The framework establishes that someone is always human, never the agent itself.
"The framework fills a critical gap in policy guidance for agentic AI by establishing foundational principles for assurance and risk mitigation." - April Chin, Co-Chief Executive Officer, Resaro
This distinction matters because agentic AI operates fundamentally differently from conversational chatbots. While chatbots wait for instructions and respond, agents reason, plan, use tools, access databases, and chain multiple actions together without seeking permission at every step. The governance challenge scales with this autonomy.
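The chatbot-versus-agent distinction can be sketched in a few lines of Python. Everything below is illustrative — the tool names, the planning logic, and the flight data are invented for this example, not any vendor's API:

```python
# A chatbot produces one response per instruction and stops.
def chatbot_turn(prompt: str) -> str:
    return f"Response to: {prompt}"

# An agent instead decomposes a goal into steps and invokes tools itself.
# These tools are stand-ins; a real agent would call live services.
TOOLS = {
    "search_flights": lambda query: [{"flight": "SQ321", "price": 740}],
    "book_flight":    lambda flight: {"status": "booked", "flight": flight},
}

def run_agent(goal: str) -> list:
    """Plan, call tools, and chain results without asking at every step."""
    actions = []
    options = TOOLS["search_flights"](goal)            # step 1: gather options
    cheapest = min(options, key=lambda f: f["price"])  # step 2: reason over them
    actions.append(("search_flights", options))
    confirmation = TOOLS["book_flight"](cheapest["flight"])  # step 3: act
    actions.append(("book_flight", confirmation))
    return actions
```

The point of the sketch: every action inside `run_agent` happens without a human in the loop, which is exactly the behaviour the framework's oversight pillar targets.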
Building Momentum Through Strategic Positioning
Singapore has been constructing this regulatory foundation for years. Its original Model AI Governance Framework launched in 2019, followed by the AI Verify testing toolkit, development resources for large language model applications, and the establishment of an AI Safety Institute. The country also leads the ASEAN Working Group on AI Governance, amplifying this framework's influence across Southeast Asia.
Prime Minister Lawrence Wong reinforced this direction in his February Budget Speech, announcing a National AI Council and national AI Missions targeting advanced manufacturing, finance, and healthcare. Government data shows over 60 firms have established AI Centres of Excellence in Singapore, creating a substantial implementation base.
However, ambition outpaces readiness. A Deloitte report published in February found only 14% of Singapore leaders maintain mature models for agentic AI governance, trailing the global average of 21%. Half rely on patchwork combinations of public and internal frameworks for risk assessment, highlighting the gap this new guidance aims to fill.
By The Numbers
- 14%: Share of Singapore leaders with mature agentic AI governance models, per Deloitte
- 40%: Enterprise applications expected to embed task-specific AI agents by end of 2026, up from under 5% in 2024 (Gartner)
- $10.86 billion: Projected global agentic AI market value in 2026, up from $7.55 billion in 2025
- 79%: Organisations reporting some level of agentic AI adoption globally in 2026
- 60+: Firms with AI Centres of Excellence in Singapore
Regional Governance Ripple Effects Accelerate
Singapore's governance frameworks consistently become templates across ASEAN. The 2019 framework influenced AI governance development in Thailand, the Philippines, and Vietnam. This agentic AI framework appears positioned for similar regional adoption, particularly as neighbouring countries advance their own AI regulatory approaches.
South Korea has moved on a parallel track, with its AI Basic Act entering force in January 2026, supported by the recent $300 million bilateral AI alliance announced between Seoul and Singapore. China has pursued a different path entirely, mandating embedded watermarks and encrypted metadata in AI-generated content, with software removing these watermarks now prohibited.
India updated its IT Rules in 2025 to mandate labelling and removal of AI-generated content. Each approach reflects different regulatory philosophies, but Singapore's framework stands apart in specifically addressing autonomous agent behaviour rather than just content generation.
"Open and distributed AI innovation will accelerate capability diffusion. Governance must therefore be embedded at design stage, not retrofitted post-deployment." - Emad Mostaque, AI industry commentator
Enterprise Implementation Requirements Take Shape
The framework applies broadly to organisations developing AI agents internally and those adopting third-party solutions. This scope encompasses any company deploying Salesforce Agentforce, Microsoft Copilot agents, or custom autonomous workflows within Singapore's jurisdiction.
Four practical implementation requirements emerge clearly:
- Risk assessment becomes mandatory strategic thinking, not compliance paperwork. The framework expects organisations to evaluate failure scenarios, not just success cases. This means stress-testing autonomous workflows before production deployment.
- Human oversight requires meaningful control points, not micromanagement. The framework acknowledges agents need operational autonomy to deliver value. Requirements focus on oversight at critical decision junctions rather than every individual action.
- User responsibility creates real accountability. If deployed agents cause harm, organisations own the consequences. This establishes that "the AI made the decision" will not constitute an acceptable legal defence.
- Technical controls must match operational risks. Higher-risk deployments require stronger containment measures, creating tiered governance expectations based on potential impact.
These requirements align with broader challenges organisations face in scaling AI initiatives from pilot to production, where governance often becomes the bottleneck.
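As a rough sketch of what "oversight at critical decision junctions" could look like in practice — the risk tiers, field names, and approval flow below are assumptions for illustration, not language from the framework:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    risk: str   # "low", "medium", or "high" — an assumed tiering scheme
    owner: str  # the accountable human, never the agent itself

def execute(action: AgentAction, approve) -> str:
    """Run low-risk actions automatically; gate higher-risk ones on a human.

    `approve` is a callback representing the human decision at the junction.
    """
    if action.risk == "low":
        return f"auto-executed: {action.description}"
    # Medium and high-risk actions are the "critical decision junctions":
    # a named human decides, and the result records who was accountable.
    if approve(action):
        return f"executed with approval by {action.owner}: {action.description}"
    return f"blocked by {action.owner}: {action.description}"
```

A design like this also serves the accountability pillar: every execution path records the named human owner, so "the AI made the decision" never appears in the audit trail.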
| Country | AI Governance Approach | Status (2026) | Primary Focus |
|---|---|---|---|
| Singapore | Model Framework for Agentic AI | Published Jan 2026 | Accountability and risk pillars |
| South Korea | AI Basic Act | In force Jan 2026 | Comprehensive AI regulation |
| China | AI Content Labelling Rules | In force Sep 2025 | Traceability and watermarks |
| India | IT Rules Amendment | Updated 2025 | Deepfake and content labelling |
| Vietnam | AI Law | Enforced 2025 | Data protection and transparency |
What makes agentic AI different from regular AI applications?
Agentic AI systems can reason, plan, and execute actions across multiple steps without human intervention at each stage. Unlike chatbots that respond to prompts, agents use tools, access databases, and chain tasks together autonomously. This operational independence creates new governance challenges around accountability and risk management.
Is Singapore's framework legally binding for companies?
No. The Model AI Governance Framework for Agentic AI provides voluntary guidance rather than legal requirements. However, Singapore historically converts voluntary frameworks into industry standards through adoption pressure and regional influence. Companies operating in the region should consider it a preview of future regulatory expectations.
Which companies need to comply with this framework?
Any organisation deploying autonomous AI agents in Singapore, whether built internally or purchased from vendors like Salesforce, Microsoft, or Google. The framework covers both developers and adopters of agentic AI systems, meaning enterprise buyers assume governance responsibility alongside technology providers.
How does this framework compare to AI regulations in other countries?
Singapore's approach specifically targets autonomous agent behaviour, while most other regulations focus on AI-generated content or general AI applications. The framework emphasises accountability principles rather than technical specifications, creating flexible guidance that can adapt to evolving technology capabilities while maintaining clear responsibility structures.
What happens if companies ignore the framework recommendations?
Currently, no direct penalties exist since the framework is voluntary. However, Singapore often uses soft governance to establish industry expectations before introducing formal regulations. Companies ignoring these guidelines may face competitive disadvantages, regulatory scrutiny, or difficulties accessing government AI initiatives and partnerships within Singapore's growing AI ecosystem.
Singapore has drawn the first governance lines around autonomous AI agents, establishing accountability frameworks that balance innovation with responsibility. As these systems become integral to enterprise operations across Asia, other governments will face pressure to develop similar guidance.
How do you think other Asian governments should approach agentic AI governance? Should they follow Singapore's voluntary framework model or pursue binding regulations from the start? Drop your take in the comments below.

