The Governance Gap Is Killing Enterprise AI in Asia
Companies across Asia-Pacific are spending more on artificial intelligence than ever before. Budgets are up 15% year on year. Boards are demanding returns. And yet roughly half of all enterprise AI proofs of concept in the region never make it past the pilot stage.
That is the uncomfortable finding from Lenovo's CIO Playbook 2026, released this month, which surveyed IT leaders across the Asia-Pacific. The report paints a picture of a region that is enthusiastic about AI but struggling to convert that enthusiasm into working systems at scale.
The numbers tell a contradictory story: record spending on one side, stalled pilots on the other.
Billions In, Half Wasted
Some 96% of organisations surveyed plan to increase AI investment over the next 12 months. They expect a return of US$2.85 for every dollar spent. But only 10% describe themselves as ready for large-scale deployment of agentic AI, the next wave of autonomous AI systems that can plan, reason, and act without constant human direction.
Another 60% say they are "exploring" agentic AI in limited deployments. And 41% admit it will take more than a year before they see meaningful results at scale. The bottleneck is not the technology. It is everything around it.
"Selecting the wrong model for a task rapidly depletes budgets within two quarters." – Art Hu, Senior Vice President, Global CIO and Chief Delivery and Technology Officer for SSG, Lenovo
By The Numbers
- ~50%: Share of Asia-Pacific enterprise AI proofs of concept that never reach production
- 96%: APAC organisations planning to increase AI investment in the next 12 months
- US$2.85: Expected return for every dollar invested in enterprise AI across the region
- 10%: Organisations that consider themselves ready for scaled agentic AI deployment
- 15x: How much inference costs can exceed initial training costs over a model's lifecycle
Governance, Not GPUs, Is the Real Problem
The Lenovo report identifies governance as the primary obstacle, not computing power or talent. Only one in three Asia-Pacific organisations currently has a comprehensive AI governance framework in place. That matters because without clear rules on data handling, model accountability, and risk management, pilots stall in compliance review and never get the green light for production.
Gordon Orr, a Lenovo board director and former McKinsey Asia chairman, put it bluntly. Board members are already facing legal scrutiny over AI decisions. Governance is not an optional compliance exercise. It is a requirement for any organisation that wants to deploy AI at scale without exposing its leadership to personal liability.
"Board members have already faced legal scrutiny over AI decisions, making governance a requirement rather than optional compliance." – Gordon Orr, Board Director, Lenovo, and Former Asia Chairman, McKinsey
This aligns with separate findings from Gartner, which projects that more than 40% of all agentic AI projects globally will fail by 2027, driven by runaway costs, unclear business value, and agents that behave in ways that violate internal policy.
The Hidden Cost Trap
One of the least understood risks is the cost of inference. Training a large model gets the headlines and the budget approvals. But running that model in production, responding to queries, making predictions, processing transactions, is where the real expense lives.
According to the Lenovo report, inference costs can run up to 15 times the initial training cost over a model's operational lifecycle. Most organisations did not account for this in their original business cases, meaning projects that looked financially viable at the pilot stage become unsustainable at scale.
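The arithmetic behind that trap is simple to sketch. The following is an illustrative calculation, not figures from the report: it combines the two headline ratios above (an expected return of US$2.85 per dollar invested, and lifecycle inference costs of up to 15 times the training cost) with a hypothetical US$1 million training budget to show how a pilot-stage business case flips once inference is counted.

```python
def lifecycle_roi(training_cost, inference_multiple, return_per_dollar=2.85):
    """Return (total_cost, expected_return, net) over a model's lifecycle.

    Many business cases size the return against the training spend alone,
    which is exactly the mistake this sketch illustrates.
    """
    inference_cost = training_cost * inference_multiple
    total_cost = training_cost + inference_cost
    expected_return = training_cost * return_per_dollar
    return total_cost, expected_return, expected_return - total_cost

# Pilot-stage view: inference ignored, the project looks comfortably profitable.
cost, ret, net = lifecycle_roi(training_cost=1_000_000, inference_multiple=0)
print(f"Pilot view:     cost ${cost:,.0f}, return ${ret:,.0f}, net ${net:,.0f}")

# Lifecycle view: inference at 15x training cost flips the economics.
cost, ret, net = lifecycle_roi(training_cost=1_000_000, inference_multiple=15)
print(f"Lifecycle view: cost ${cost:,.0f}, return ${ret:,.0f}, net ${net:,.0f}")
```

On these assumed numbers, a project that nets roughly US$1.85 million at the pilot stage swings to a double-digit-million loss once lifecycle inference is included, which is why hybrid and edge deployments that cut recurring inference bills have become so attractive.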
This helps explain why 86% of Asia-Pacific organisations now incorporate on-premises or edge computing environments alongside cloud in their AI infrastructure. In Southeast Asia specifically, 81% prefer hybrid models. Running inference workloads closer to the data source cuts latency and, critically, reduces the recurring cloud bills that compound month after month.
Southeast Asia's Uneven Track Record
The failure rates vary sharply across the region. Research from Pertama Partners puts the overall AI project failure rate in Southeast Asia at 77.2%, slightly better than the global average of 80.3% but with wide variation between countries.
Singapore fares best, with a 71.4% failure rate, benefiting from stronger government AI guidance, a deeper talent pool, and a higher concentration of digital-native companies. Malaysia sits at 78.9%, Thailand at 79.6%, Indonesia at 82.1%, the Philippines at 83.4%, and Vietnam at 84.7%.
| Country | AI Project Failure Rate | Key Factor |
|---|---|---|
| Singapore | 71.4% | Government AI initiatives, talent concentration |
| Malaysia | 78.9% | Growing data centre hub, emerging governance |
| Thailand | 79.6% | Digital transformation push, infrastructure gaps |
| Indonesia | 82.1% | Early-stage funding dependency, compliance complexity |
| Philippines | 83.4% | BPO sector AI integration challenges |
| Vietnam | 84.7% | New AI law, nascent governance frameworks |
The pattern is clear. Countries with stronger governance infrastructure and government-led AI frameworks see meaningfully better outcomes. As Singapore's SME experience shows, even mature markets struggle when governance lags behind ambition.
What Separates the 10% That Scale
The minority of organisations that do reach production share several characteristics:
- They treat AI governance as a first-quarter priority, not a post-deployment afterthought
- They budget for the full lifecycle, including inference costs, monitoring, and model updates
- They start with hybrid infrastructure rather than betting entirely on cloud
- They measure success on business outcomes, not model accuracy metrics
- They have board-level accountability for AI decisions from day one
- They invest in change management alongside technical deployment
- They pilot with specific business problems rather than generic use cases
Deloitte Australia's 2026 State of AI in the Enterprise report reinforces this. Only 30% of Australian organisations are using AI to deeply transform their ways of working, compared with 34% globally. For most, AI remains an automation layer rather than a strategic capability.
IDC's FutureScape 2026 predicts that by 2028, CIOs across Asia-Pacific will increase spending on sovereign-ready cloud and data localisation by 50% just to stay compliant, a cost that most current AI budgets do not account for. The implications extend beyond individual companies to regional competitive positioning.
Why do most AI pilots fail in Asia-Pacific?
The primary cause is governance failure, not technical issues. Without clear frameworks for data handling, model accountability, and risk management, pilots stall in compliance reviews. Only one in three organisations has comprehensive AI governance in place.
How much do companies expect to earn from AI investments?
Asia-Pacific organisations expect a return of US$2.85 for every dollar spent on AI. However, most underestimate inference costs, which can run 15 times higher than initial training expenses over a model's operational lifecycle.
Which countries have the best AI deployment success rates?
Singapore has Southeast Asia's lowest failure rate at 71.4%, followed by Malaysia at 78.9%. Countries with stronger government AI frameworks and governance infrastructure consistently outperform those without clear regulatory guidance.
What makes the 10% of successful AI deployments different?
Successful organisations treat governance as a first-quarter priority, budget for full lifecycle costs, use hybrid infrastructure, measure business outcomes rather than technical metrics, and establish board-level accountability from day one.
How will compliance costs affect future AI budgets?
IDC predicts CIOs will increase spending on sovereign-ready cloud and data localisation by 50% by 2028. This represents a significant cost that most current AI budgets haven't factored in, potentially derailing projects that appear financially viable today.
The gap between AI enthusiasm and production success across Asia-Pacific reveals a fundamental truth: technology is never the hardest part of digital transformation. As more organisations discover that massive investment doesn't guarantee deployment success, the focus is shifting from what's possible to what's sustainable.
Between now and 2030, CIOs will be judged not on how many AI experiments they launch, but on how many they can operationalise securely, affordably, and compliantly. The half that never make it to production aren't failing because they picked the wrong model. They're failing because they built their AI strategy on quicksand instead of solid governance foundations. What governance challenges is your organisation facing as it moves from AI pilots to production? Drop your take in the comments below.






