
AI in ASIA

Unveiling the Dark Side of AI: The Transparency Dilemma in the AI Market

Explore the lack of transparency in Asia's booming AI market and its implications for users and businesses.

Intelligence Desk · 3 min read

AI Snapshot

The TL;DR: what matters, fast.

The Asia-Pacific region, excluding China, will see AI spending increase significantly, primarily in predictive and interpretative AI applications.

Transparency in how foundation models are trained is a major challenge, with a study showing low transparency scores across major models.

Who should pay attention: AI developers | Regulators | Businesses investing in AI

What changes next: Increased scrutiny on AI model transparency is expected as the market expands.

TL;DR:

Asia-Pacific's AI market is set to skyrocket, with spending projected to reach $90.7 billion by 2027, yet transparency remains a significant concern. A study reveals that no major foundation model developer provides adequate transparency, with the highest score being just 54%. Separately, a Salesforce survey found that 54% of AI users do not trust the data used to train AI systems, underscoring a pressing need for transparency.

The Booming AI Market in Asia-Pacific

Asia-Pacific, excluding China, is witnessing a remarkable surge in Artificial Intelligence (AI) investment. According to IDC, the region's spending on AI will grow at a compound annual growth rate of 28.9%, from $25.5 billion in 2022 to a staggering $90.7 billion by 2027. The majority of this spending, about 81%, will be directed towards predictive and interpretative AI applications. This growth is part of a broader trend explored in AI Boom Fuels Asian Market Surge. For a deeper dive into regional AI trends, see APAC AI in 2026: 4 Trends You Need To Know.
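The 28.9% figure is consistent with reading IDC's projection as a compound annual growth rate over the five years from 2022 to 2027. A quick sketch confirms the arithmetic (figures from the article; the CAGR interpretation is our reading, not IDC's wording):

```python
# Sanity-check IDC's APAC AI spending forecast: does 28.9% annual growth
# take $25.5B (2022) to $90.7B (2027)?
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

start_2022 = 25.5  # USD billions, 2022 spending
end_2027 = 90.7    # USD billions, 2027 projection

rate = cagr(start_2022, end_2027, years=5)
print(f"Implied CAGR: {rate:.1%}")  # prints 28.9%

# Year-by-year projection at that rate
for year in range(2022, 2028):
    projected = start_2022 * (1 + rate) ** (year - 2022)
    print(year, round(projected, 1))
```

Compounding matters here: a flat 28.9% rise on $25.5 billion would give only about $32.9 billion, so the headline figure only works as an annualised rate.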

Generative AI: Hype vs. Reality

While generative AI has been the talk of the town, it will account for just 19% of the region's AI expenditure. Chris Marshall, an IDC Asia-Pacific VP, speaking at the Intel AI Summit in Singapore, emphasised the need for a broader approach to AI that extends beyond generative models. This shift reflects a maturing understanding as the initial hype fades, mirrored in the cautious approach executives are taking towards generative AI adoption.

The Transparency Conundrum

Despite the growing interest in AI, transparency around how foundation models are trained remains a challenge. As more organisations adopt AI, this opacity risks escalating tension with users. The issue is particularly acute in the region, as explored in Southeast Asia: AI's Trust Deficit?.

The State of Transparency in AI

A study by researchers from Stanford University, MIT, and Princeton assessed the transparency of 10 major foundation model developers. The highest score was a mere 54%, indicating a significant lack of transparency across the AI industry. This concern is echoed by global initiatives striving for responsible AI, such as the discussion in AI with Empathy for Humans.
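Scores like "54%" in such studies are typically the fraction of disclosure indicators a developer satisfies. A minimal sketch of that scoring approach, with illustrative placeholder indicators and example data rather than the study's actual rubric:

```python
# Indicator-based transparency scoring, in the spirit of the Stanford/MIT/
# Princeton study. Indicator names and the example below are hypothetical.
INDICATORS = [
    "training data sources disclosed",
    "compute and hardware disclosed",
    "data labour practices disclosed",
    "model evaluations published",
    "downstream usage policy published",
]

def transparency_score(satisfied):
    """Percentage of indicators a developer satisfies."""
    met = sum(1 for indicator in INDICATORS if indicator in satisfied)
    return 100 * met / len(INDICATORS)

# A hypothetical developer disclosing 2 of the 5 indicators
example = {"model evaluations published", "downstream usage policy published"}
print(f"{transparency_score(example):.0f}%")  # prints 40%
```

The point of such a rubric is that a score of 54% means roughly half of the assessed disclosures are simply absent, regardless of how capable the model is.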

Why Transparency Matters

Alexis Crowell, Intel's Asia-Pacific and Japan CTO, stressed that transparency is essential if AI is to be accessible, flexible, and trusted by individuals, industries, and society. She expressed hope that the situation will improve as benchmarks become available and organisations begin monitoring AI developments. This push for explainable AI aligns with the broader argument that ProSocial AI Is The New ESG.

The Trust Deficit

A Salesforce survey revealed that 54% of AI users do not trust the data used to train AI systems. This lack of trust underscores the urgent need for transparency in AI. Research into AI governance, such as the World Economic Forum's report AI Governance: A New Framework for Responsible Innovation, highlights the importance of addressing these trust issues through robust frameworks.

Accuracy vs. Transparency: A Myth

Contrary to popular belief, accuracy does not have to come at the expense of transparency. A study led by Boston Consulting Group found that black-box and white-box AI models produced similarly accurate results for nearly 70% of the datasets tested.

The Road Ahead

As AI continues to permeate various aspects of our lives, building trust and transparency is crucial. This can be achieved through governance frameworks modelled on data-protection legislation such as Europe's GDPR. Countries like Taiwan are already working on comprehensive frameworks, as seen in Taiwan’s AI Law Is Quietly Redefining What “Responsible Innovation” Means.

Comment and Share

What are your thoughts on the transparency dilemma in the AI industry? How can we ensure that AI systems are fair, explainable, and safe? Share your views in the comments section below and don't forget to Subscribe to our newsletter for updates on AI and AGI developments.



Latest Comments (4)

Zhang Yue (@zhangy) · 29 January 2026

This 54% transparency score is really concerning. We see similar issues with models like Qwen and DeepSeek, where the training data specifics are often opaque. My own work on visual transformers hits this wall too. It makes replication and auditing incredibly difficult for academic research.

Maria Reyes (@mariar) · 28 January 2026

I remember that IDC projection of $90.7 billion for APAC AI by 2027. It's clear that the spending is happening, and here in Manila, we're seeing more and more of it directed to predictive models for things like fraud detection in banking. Transparency is key for adoption though, especially when we're trying to build trust for new digital financial products. We need to know what's under the hood.

Budi Santoso (@budi_s) · 21 August 2024

The article mentions 81% of spending going to predictive and interpretive AI. For us in fintech, how does this lack of transparency impact models for credit scoring or fraud detection when dealing with limited data sets from rural users?

Lee Chong Wei (@lcw_tech) · 31 July 2024

we're seeing this at work too. everyone wants to talk about gen AI but the real infrastructure spend is on things like predictive analytics for our logistics network. those interpretative AI applications that IDC mentioned make up the bulk of our cloud costs and scaling issues, not the fancy chatbot stuff.
