
AI in ASIA
Voices

Southeast Asia Trusts Its Governments On AI More Than Anyone Else. That Is A Problem

81% of Singaporeans and 76% of Indonesians trust governments on AI. Governance capability hasn't kept pace.

Intelligence Desk • 6 min read


Eighty-one per cent of Singaporeans trust their government to regulate AI responsibly. Seventy-six per cent of Indonesians say the same. Malaysia sits at 73%, Thailand at 70%. Those numbers, from the Stanford 2026 AI Index, make Southeast Asia the most government-trusting region in the world on AI policy. The global average is 54%. It is tempting to read that as a sign of civic health. It is not. It is a warning.

High Trust Without Capable Frameworks Is The Dangerous Combination

Governance capability has not kept pace with public trust. The same Stanford HAI 2026 AI Index, using McKinsey survey data, puts Asia-Pacific's responsible AI maturity at 2.5 on a four-point scale. That is better than North America's 2.2, but still squarely in the "integrating" phase. Most ASEAN regulators are running voluntary frameworks, testing toolkits from AI Verify, or issuing non-binding guidance. Very few have primary AI legislation in force.

The result is a population that expects protection and a regulatory apparatus that has not been built yet. When the first real AI harm hits Indonesian or Malaysian users, the response will be slower than the public thinks, and less effective than the political rhetoric suggests. High trust narrows the margin for failure, because expectations have been set, and broken trust is harder to rebuild than baseline trust is to earn.


By The Numbers

  • 81% of Singaporeans trust government AI regulation, as do 76% of Indonesians, 73% of Malaysians, and 70% of Thais. Source: Stanford 2026 AI Index.
  • 54% is the global average for the same question. Southeast Asia exceeds it by 16 to 27 percentage points.
  • 2.5 / 4.0 is Asia-Pacific's responsible AI maturity score. Still in the "integrating" phase, per McKinsey survey data.
  • 81% of Southeast Asian companies are piloting or scaling AI, versus 63% globally. Source: McKinsey and Singapore EDB, February 2026.
  • +9 percentage points was Malaysia's year-on-year increase in AI optimism, the largest of any surveyed country. Source: Ipsos 2025-2026.

Where The Gap Will First Show Up

There are four likely pressure points in the next 18 months. Financial services first, because Asian banks are automating credit decisions and fraud detection faster than any consumer-facing sector. Health next, as government hospital networks in Thailand and Indonesia scale AI-assisted diagnostics. Content moderation third, as India's IT Rules 2026 push AI-content labelling and deepfake takedown liability across the region. And public sector AI fourth, as welfare systems, school placement tools, and traffic enforcement models go live in Singapore, Kuala Lumpur, and Manila.

In each of these, the gap between public expectation of oversight and the actual institutional capacity to audit will widen. Singapore is probably the best-prepared jurisdiction, because IMDA and MAS have been explicit about tooling-first governance. Indonesia's OJK has moved early on banking AI governance but has less to work with elsewhere. Malaysia's MyDIGITAL is building capacity, but implementation is uneven across ministries. Thailand's ETDA has issued voluntary guidelines and not much else.

The Optimism Paradox

Southeast Asia is also the most optimistic region in the world about AI. Over 80% of respondents in Malaysia, Thailand, Indonesia, and Singapore expect AI to profoundly change their lives in the next three to five years. That optimism is a strategic asset. It makes public-sector AI deployment easier, drives higher enterprise adoption, and attracts investment. But it compounds the regulatory problem: populations that are optimistic about AI are faster to grant AI systems authority in their lives, and slower to notice when those systems fail.

Across ASEAN, we see that the adoption of AI has grown much more quickly than the ability of our systems to guide it.

Piti Srisangnam, Executive Director, ASEAN Foundation, speaking at the ASEAN AI Dialogue, early 2026

High public trust is a resource that regulators can spend or save. Spending it without building institutional capacity is a mistake.

Sana Belaidi, Partner, Access Partnership Asia, writing in a February 2026 policy brief

What Should Happen Now

The region needs three things, in this order. First, independent audit capacity: at least one accredited AI audit body in each ASEAN member state, with public reporting and the ability to issue non-binding findings that influence procurement. Second, sector-specific statutes where voluntary frameworks cannot carry the weight, starting with finance and health. Third, public-sector transparency: government AI systems that affect citizen access to services should publish model cards, data sources, and review windows by default.

  1. Create an accredited AI audit body per ASEAN member state.
  2. Legislate sector-specific AI statutes in finance and health first.
  3. Mandate model cards and data-source disclosures for all public-sector AI.
  4. Fund civil-society AI oversight groups to act as a second layer of review.
  5. Build cross-border incident reporting, ideally through the ASEAN Working Group on AI Governance.

The Singapore Question

Singapore deserves a specific note. Singaporeans' 81% trust level is the highest in the world, and IMDA has earned some of it. But Singapore's governance is built on voluntary toolkits and soft law, not primary statute. If Singapore continues to export its approach across ASEAN, the region ends up with a beautifully engineered compliance scaffolding and no binding law underneath it. That works while Asian AI deployment stays relatively benign. It stops working the first time a deployment fails in a way that requires enforcement, not attestation.

Country   | Gov. trust (AI) | AI piloting / scaling | Primary AI law
Singapore | 81%             | 75%                   | None (voluntary)
Indonesia | 76%             | 68%                   | OJK banking guideline
Malaysia  | 73%             | 72%                   | MyDIGITAL voluntary
Thailand  | 70%             | 65%                   | ETDA voluntary
Korea     | 47%             | 82%                   | AI Basic Act (2026)
Japan     | 52%             | 70%                   | Assurance framework

This is a continuation of our earlier piece on Southeast Asian AI sovereignty framing, and it connects to Korea's AI Basic Act enforcement, which is the region's strongest statutory counter-example. For the Singapore toolkit we are worried will become the regional default, see our Model AI Governance Framework analysis. And for the underlying optimism data, see our reporting on Stanford's 2026 AI Index.

The AI in Asia View We think Southeast Asia's high government trust on AI is the most underappreciated strategic vulnerability in the region's 2026 outlook. Trust is a strategic asset only if the institution being trusted has the operational capacity to meet its obligations. Across ASEAN, it generally does not, and the gap will be exposed as soon as a meaningful AI harm hits a consumer population. We are not arguing for more scepticism; we are arguing for more capacity. The way to honour 81% trust is to build the audit infrastructure, statutes, and civil-society oversight the trust presumes you have already built.

Frequently Asked Questions

Why is Southeast Asian government trust on AI so high?

A mix of strong economic growth, visible public-sector digitalisation, and relatively positive recent experiences with government technology services (like Singapore's SingPass, Indonesia's Peduli Lindungi, and Thailand's digital ID). Citizens extend trust from those experiences to AI governance.

Is high trust actually dangerous?

Not inherently, but it raises the cost of failure. When trust is high, regulators have less room to announce partial or provisional measures. A visible AI failure is more politically consequential in a high-trust environment than in a sceptical one.

What would a "capable" AI framework look like?

Independent audit bodies, sector-specific binding statutes where voluntary tools are insufficient, mandatory public-sector transparency on AI systems affecting citizens, and funded civil-society oversight. No ASEAN jurisdiction has all four today.

Is Korea's AI Basic Act the right model?

It is the strongest statutory counter-example in the region, but its enforcement is still early. Watching how Korean enterprises interpret it during 2026 will teach ASEAN regulators more than any framework document they commission.


What would your country need to do to earn the trust its citizens have already extended to it on AI?
