
Unveiling the Dark Side of AI: The Transparency Dilemma in the AI Market

Explore the lack of transparency in Asia’s booming AI market and its implications for users and businesses.


TL;DR:

  1. Asia-Pacific’s AI market is set to skyrocket, with spending projected to reach $90.7 billion by 2027, yet transparency remains a significant concern.
  2. A study reveals that no major foundation model developer provides adequate transparency, with the highest score being just 54%.
  3. 54% of AI users do not trust the data used to train AI systems, indicating a pressing need for transparency.

The Booming AI Market in Asia-Pacific

Asia-Pacific, excluding China, is witnessing a remarkable surge in Artificial Intelligence (AI) investment. According to IDC, the region’s AI spending will grow at a compound annual growth rate (CAGR) of 28.9%, from $25.5 billion in 2022 to a staggering $90.7 billion by 2027. The majority of this spending, about 81%, will be directed towards predictive and interpretative AI applications.
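As a rough sanity check, the cited figures are consistent with a 28.9% annual rate compounded over 2022–2027. The short sketch below simply reproduces that arithmetic; the variable names are illustrative, not from IDC’s report.

```python
# Back-of-the-envelope check of the IDC projection:
# $25.5B in 2022 compounding at 28.9% per year for five years.
start_spend_bn = 25.5        # 2022 AI spending, in billions of USD
cagr = 0.289                 # compound annual growth rate
years = 2027 - 2022          # five years of compounding

projected = start_spend_bn * (1 + cagr) ** years
print(f"Projected 2027 AI spend: ${projected:.1f}B")  # ~$90.7B, matching the cited figure
```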

Generative AI: Hype vs. Reality

While generative AI has been the talk of the town, it will account for just 19% of the region’s AI expenditure. Speaking at the Intel AI Summit in Singapore, Chris Marshall, an IDC Asia-Pacific vice president, emphasised the need for a broader approach to AI that extends beyond generative AI.

The Transparency Conundrum

Despite the growing interest in AI, transparency around how foundation models are trained remains a challenge. This lack of transparency can lead to increasing tension with users as more organisations adopt AI.

The State of Transparency in AI

A study by researchers from Stanford University, MIT, and Princeton assessed the transparency of 10 major foundation model developers. The highest score was a mere 54%, pointing to a significant lack of transparency across the AI industry.


Why Transparency Matters

Alexis Crowell, Intel’s Asia-Pacific Japan CTO, stressed the importance of transparency for AI to be accessible, flexible, and trusted by individuals, industries, and society. She expressed hope that the situation might change with the availability of benchmarks and organisations monitoring AI developments.

The Trust Deficit

A Salesforce survey revealed that 54% of AI users do not trust the data used to train AI systems. This lack of trust underscores the urgent need for transparency in AI.

Accuracy vs. Transparency: A Myth

Contrary to popular belief, accuracy does not have to come at the expense of transparency. A research report led by Boston Consulting Group found that black-box and white-box (interpretable) AI models produced similarly accurate results for nearly 70% of the datasets examined.
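To make that comparison concrete, here is a minimal, hypothetical sketch, not the BCG study’s methodology: the synthetic dataset and the choice of logistic regression versus gradient boosting are assumptions for illustration only.

```python
# Illustrative comparison of a "white-box" model (logistic regression)
# against a "black-box" model (gradient boosting) on the same data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

white_box = LogisticRegression(max_iter=1000).fit(X_train, y_train)
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("White-box accuracy:", accuracy_score(y_test, white_box.predict(X_test)))
print("Black-box accuracy:", accuracy_score(y_test, black_box.predict(X_test)))
```

When the two scores land close together, as the BCG report found for most datasets, the interpretable model has the added advantage that its decisions can be inspected and audited.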

The Road Ahead

As AI continues to permeate various aspects of our lives, it is crucial to build trust and transparency. This can be achieved through governance frameworks, much like data-management legislation such as Europe’s GDPR.

Comment and Share

What are your thoughts on the transparency dilemma in the AI industry? How can we ensure that AI systems are fair, explainable, and safe? Share your views in the comments section below and don’t forget to subscribe for updates on AI and AGI developments.


You may also like:

Singaporeans Have Trust Issues Around How Companies are Using AI

Two-Faced AI: Hidden Deceptions and the Struggle to Untangle Them

YouTube’s New AI Disclosure Policy: A Step Towards Transparency in Asia’s AGI Landscape

To learn more about AI issues, tap here.

