



Shadow AI at Work: A Wake-Up Call for Business Leaders

AI tools like ChatGPT are quietly reshaping how people work – but nearly half of users admit to using them in risky or inappropriate ways.

Intelligence Desk – 2 min read


58% of workers use AI at work – but nearly half (47%) admit to using it in risky or non-compliant ways, like uploading sensitive data or hiding its use. ‘Shadow AI’ is rampant, with 61% not disclosing AI use, and 66% relying on its output without verifying – leading to mistakes, compliance risks, and reputational damage. Lack of AI literacy and governance is the root cause – only 34% of companies have AI policies, and less than half of employees have received any AI training.

AI at work is booming – but too often it’s happening in the shadows, unsafely and unchecked.

The Silent Surge of Shadow AI

Almost Half Admit to Using AI Inappropriately

- Uploading sensitive company or customer data into public tools like ChatGPT (48% admitted to this)
- Using AI against company policies (44%)
- Not checking AI’s output for accuracy (66%)
- Passing off AI-generated content as their own (55%)

The Rise of Shadow AI

- 61% of workers have used AI without telling anyone
- 66% don’t know if it’s allowed
- 55% claim AI output as their own work

Why Are Employees Going Rogue?

What Can Businesses Do Now?

  1. Develop clear AI policies

Clear AI policies are crucial for businesses navigating rapid advances in artificial intelligence. Countries like Taiwan are already establishing frameworks for responsible innovation, setting a precedent for global standards.

  2. Invest in AI literacy

Investing in AI literacy is key, as highlighted by a recent PwC report on AI at Work. This means training employees on how to use AI tools effectively and ethically. Understanding top AI tools and their appropriate applications can significantly reduce shadow AI risks.

  3. Foster transparency, not fear
  4. Implement oversight and accountability

A Tipping Point Moment

Final Thought


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

This article is part of the AI in Business (Asia) learning path.



Latest Comments (4)

Ahmad Razak (@ahmadrazak) – 21 February 2026

The statistic about 48% of workers uploading sensitive data to public AI tools is concerning, but perhaps not entirely surprising given the rapid adoption. In Malaysia, our national AI roadmap emphasizes not just innovation but also secure and ethical deployment, particularly for government and critical infrastructure. This kind of shadow AI usage could severely undermine public trust and data sovereignty efforts. It raises the question: beyond developing policies, how do we effectively enforce these guidelines and ensure employees understand the national and regional implications of such actions, especially when engaging with tools hosted externally?

Rachel Foo (@rachelf) – 21 June 2025

I completely get the 48% uploading sensitive data to ChatGPT. It's so frustrating trying to get internal tools approved fast enough when everyone's already using the public ones for convenience. How many banks are secretly grappling with rogue AI usage because compliance takes ages?

Krit Tantipong (@krit_99) – 7 June 2025

The 48% admitting to uploading sensitive data is worrying. In logistics, client data privacy is everything. We’ve had to implement strict internal policies for any LLM use.

Oliver Thompson (@olivert) – 24 May 2025

That 48% uploading sensitive data into public tools, well, that's a bit of a sticky wicket, isn't it? We've had a few close calls ourselves.
