AI in ASIA

Workers Are Using AI More But Trusting It Less

Employee AI usage surges 13% while trust plummets 18%, revealing a troubling gap between adoption mandates and workplace reality

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

AI usage increased 13% while worker trust dropped 18% according to ManpowerGroup research

43% of US workers use AI professionally but only 13% receive company training

29% of AI users operate tools without manager knowledge, indicating oversight gaps


The Workplace AI Paradox: More Usage, Less Confidence

A striking contradiction is emerging in the modern workplace: whilst AI adoption continues its relentless march forward, employee trust in these technologies is eroding. ManpowerGroup's latest research reveals an 18% drop in worker confidence alongside a 13% increase in usage, suggesting the initial enthusiasm has given way to practical frustrations.

This paradox reflects a deeper challenge facing organisations worldwide. Simply deploying AI tools isn't sufficient to drive meaningful productivity gains. The gap between marketing promises and operational reality has left many employees feeling disillusioned with technologies they're increasingly required to use.

The Numbers Tell a Troubling Story

Recent workplace surveys paint a picture of widespread adoption coupled with growing scepticism. In the United States, 43% of workers now use AI professionally, yet only 13% receive company training on these tools. Meanwhile, 29% of users operate AI systems without informing their managers, indicating significant trust and oversight gaps.

The disconnect becomes even more apparent when examining user behaviour. While 84% of AI-using professionals report efficiency benefits, many struggle with fundamental implementation challenges. Tools frequently hallucinate information, require extensive prompt refinement, or demand complete workflow overhauls that negate promised time savings.

This trend mirrors broader concerns about corporate transparency in AI deployment, and organisations must address the underlying causes of declining confidence.

By The Numbers

  • 43% of American workers used AI professionally in Q3 2025, up from 37% the previous quarter
  • Only 13% received company AI training despite widespread adoption
  • 30% of global office workers across 16 countries are classified as AI Power Users in 2026
  • 21% of non-technical workers remain reluctant to embrace AI technologies
  • 84% of AI users report efficiency benefits, yet trust continues declining

When Reality Falls Short of Promise

The enthusiasm gap stems largely from unrealistic expectations set by marketing materials and corporate communications. AI demonstrations often present oversimplified scenarios that bear little resemblance to complex workplace realities.

Tabby Farrar, head of search at UK agency Candour, exemplifies this challenge. Her team actively pursues AI integration for efficiency gains but frequently encounters tools that hallucinate information or demand extensive prompt engineering. These experiences transform promised time savings into additional workload burdens.

"You can't have an intimidated workforce and be fully productive. Most employees are comfortable with their established routines, and AI often demands complete workflow overhauls," said Mara Stefan, VP of Global Insights at ManpowerGroup.

This psychological resistance compounds technical challenges. Workers must invest significant mental effort to adapt familiar processes, often questioning whether the disruption justifies the benefits. The result is a workforce simultaneously using and resenting the technologies they're expected to embrace.

Understanding broader workplace anxieties about AI replacement helps explain this resistance.

The Training Deficit Crisis

Perhaps the most damaging factor in declining AI confidence is inadequate organisational support. ManpowerGroup's research found that 56% of respondents received no recent AI training, whilst 57% lacked access to relevant mentorship programmes.

This educational vacuum leaves employees struggling with powerful technologies they neither understand nor trust. Without proper guidance, workers resort to trial-and-error approaches that reinforce negative perceptions of AI reliability and usefulness.

"We often assume that more technology means less connection. But our data tells a different story. The employees most embedded in AI workflows are also the ones most engaged in learning and have better team relationships," said Janet Pogue McLaurin, Global Director of Workplace Research at Gensler.

Successful organisations invest heavily in comprehensive training programmes that transform anxiety into competence. They create environments where experimentation is encouraged and failures become learning opportunities rather than sources of frustration.

Companies exploring strategic AI implementation must prioritise human-centred approaches that build confidence alongside capability.

How training approach relates to employee confidence, adoption success and productivity impact:

  • Comprehensive programmes: high confidence, 78% adoption success, significant productivity gains
  • Basic orientation: moderate confidence, 45% adoption success, marginal improvements
  • Self-directed learning: low confidence, 23% adoption success, mixed results
  • No formal training: very low confidence, 12% adoption success, often negative impact

Building Bridges to Better AI Adoption

Forward-thinking organisations are developing strategies that address both technical and psychological barriers to AI adoption. These approaches recognise that successful implementation requires more than software deployment.

Key elements include:

  1. Appointing internal AI champions who understand both technology capabilities and team dynamics
  2. Building buffer time into projects for AI experimentation and learning
  3. Framing new tools as "test and learn" initiatives rather than productivity mandates
  4. Providing regular check-ins and open forums for discussing challenges and successes
  5. Creating tailored AI solutions that address specific organisational needs rather than generic applications

At Candour, Farrar's team exemplifies this thoughtful approach. They've developed a "Gemini Gem" trained on brand guidelines to generate client-ready quotes, demonstrating AI's potential when properly customised. This success story contrasts sharply with their earlier frustrations with generic tools.

The contrast between successful and failed implementations often relates to whether organisations view generational differences in AI adoption as opportunities rather than obstacles.

Why are employees losing trust in AI despite increased usage?

The gap between marketing promises and operational reality creates frustration. Many tools require extensive refinement or produce unreliable outputs, leading to negative experiences that undermine confidence even as usage mandates increase.

What role does training play in AI adoption success?

Comprehensive training programmes dramatically improve adoption rates and user confidence. Without proper education, employees struggle with powerful technologies they neither understand nor trust, creating negative feedback loops.

How can organisations bridge the AI confidence gap?

Successful strategies include appointing internal champions, building experimentation time into workflows, providing ongoing support, and creating tailored solutions that address specific organisational needs rather than deploying generic tools.

What makes some AI implementations more successful than others?

Success correlates with human-centred approaches that prioritise user education, psychological comfort, and practical utility. Organisations that frame AI as workflow enhancement rather than replacement see better adoption rates.

Should companies be concerned about hidden AI usage among employees?

Yes, when 29% of users operate AI without management awareness, it indicates serious governance gaps. This shadow usage often reflects inadequate training and unclear policies rather than malicious intent.

The AIinASIA View: This confidence crisis represents a critical inflection point for workplace AI adoption. Organisations that continue pushing deployment without addressing fundamental trust issues will find themselves with expensive tools and frustrated workforces. The solution isn't slowing AI adoption but accelerating investment in human-centred implementation strategies. Companies that prioritise employee education, provide adequate support systems, and create psychologically safe environments for AI experimentation will emerge as productivity leaders. Those that ignore the confidence gap risk creating permanent resistance to technologies that could otherwise transform their operations.

The path forward requires acknowledging that AI adoption is fundamentally a human challenge wrapped in a technological solution. Success depends not just on choosing the right tools but on creating the right conditions for people to embrace and excel with them.

Regional trust patterns across Asia suggest this challenge extends far beyond individual organisations, requiring industry-wide commitment to better implementation practices.

What specific challenges has your organisation encountered when implementing AI tools, and how have you addressed employee concerns about reliability and usefulness? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.


This article is part of the AI Tools Power User learning path.

Latest Comments (4)

Charlotte Davies (@charlotted), 19 February 2026

This decline in confidence mirrors broader concerns we're tracking, especially within regulatory bodies. The ManpowerGroup study's finding of an 18% drop, despite increased adoption, points to a crucial disconnect that impacts not just productivity, as Mara Stefan notes, but also the broader societal acceptance required for responsible AI integration. It reinforces the need for robust ethical frameworks, perhaps akin to the principles the UK AI Safety Institute advocates, to ensure that deployment aligns with verifiable utility and trust, rather than just market enthusiasm. The "hallucination" issue Tabby Farrar mentions with prompt refinement is a perfect example of a practical barrier to trust.

Priya Ramasamy (@priyaram), 11 February 2026

The ManpowerGroup stat about an 18% drop in confidence makes sense. From what I've seen here, a lot of the off-the-shelf AI tools just aren't trained on enough local data or nuances. So the "hallucinations" Tabby Farrar mentions? We get that a lot. It's not always about bad AI, sometimes it's bad fit for market.

Charlotte Davies (@charlotted), 7 February 2026

This reinforces the need for robust ethical frameworks, as the UK AI Safety Institute advocates. Employees need to understand the limitations of these tools to build trust, especially with issues like hallucination.

Dewi Sari (@dewisari), 5 February 2026

hmm this 18% drop in worker confidence really resonates. i've been playing with some of the open source LLMs for data cleaning at work, and sometimes the 'hallucinations' Tabby Farrar mentions are just as time consuming to fix as doing it manually. makes me wonder if traditional data validation methods are becoming more important as we lean into these tools.
