AI is everywhere, or so the story goes. But when you cut through the hype, how are people actually using it? Three heavyweight studies from OpenAI, Anthropic, and Ipsos finally put some numbers behind the anecdotes. Their findings are surprisingly ordinary, occasionally contradictory, and deeply revealing about what adoption means in practice.
- ChatGPT and Claude are mostly used for writing, coding, and information lookups, not futuristic fantasies.
- Workplace adoption is split: survey data shows a slowdown, while enterprise API logs suggest steady growth.
- Trust is fragile: people use AI daily, but only half say they trust the companies that make it.
The Mundane Reality of ‘Killer Apps’
OpenAI analysed over a million ChatGPT conversations from mid-2024 to mid-2025. The results are clear: the vast majority of queries are about writing, quick guidance, or information lookups. Computer programming makes up only 4 per cent. Reflection-style “therapy chats” barely register.
Anthropic’s Claude tells a different but complementary story. Coding dominates (36 per cent), but education and science are climbing steadily. Crucially, Claude users are more likely to delegate tasks wholesale ("you do it") rather than engage step by step. This signals a subtle but important shift: moving from assistance to outright substitution.
The message from both sets of data is consistent. AI is settling into everyday niches where it works best. Forget moonshots. The killer apps are editing emails, fixing code, and summarising research papers.
Work vs Play: Two Realities
Here is where the contradictions begin. OpenAI reports that ChatGPT’s work-related usage has dropped from 40 per cent to just 28 per cent. Ipsos’ global survey reinforces this: in many countries, AI is seen more as a personal assistant than a professional backbone.
Yet Anthropic’s enterprise API data says the opposite. It finds that 40 per cent of U.S. employees now use AI at work, up from just 20 per cent in 2023. The logs show heavy-lifting use cases: debugging web apps, writing business software, and even designing other AI systems.
The contradiction may not be a contradiction at all. Chat interfaces attract hobbyists, students, and casual experimenters. API integrations are where serious work happens, often invisibly. Adoption, then, is not shrinking or booming; it is diverging.
The Trust Paradox
Ipsos’ AI Monitor 2025 captures the ambivalence neatly. More than half of respondents (54 per cent) say they trust governments to regulate AI. Just under half (48 per cent) say they trust companies to handle their data responsibly. The gap is small, but telling.
OpenAI’s Sam Altman voiced this paradox at the Paris AI Summit. Safety, he said, is critical: “we’ve got to make these systems really safe for people, or people just won’t use them.” Yet he also admitted the louder demand is not safety but scale, cost, and capability. Users worry about trust, then flock back to the platforms they mistrust.
Why Companies Still Hesitate
Anthropic’s Claude report points out a less glamorous reason why AI adoption stalls. Productivity gains require more than smart models. They demand restructuring processes, retraining staff, and updating data systems, all costly and time-consuming exercises.
Put simply, AI is not plug-and-play. It is a reengineering project. Without the right context and infrastructure, even the most advanced models stumble.
Meanwhile, Ipsos finds that adoption is heavily skewed by demographics. Young, male, and well-educated users dominate. That leaves large swathes of the population on the margins, even as the most popular personal uses, advice and information searches, are the ones most prone to hallucination.
Between Usage and Hesitation
Pull the strands together and a pattern emerges. AI in 2025 is used for the ordinary, not the extraordinary. ChatGPT logs show writing dominates. Claude’s enterprise data points to invisible but growing workplace use. Ipsos surveys highlight public hesitation, even as reliance deepens.
The paradox is not whether AI will be used. It already is, woven into daily habits. The real question is whether people and companies will reconcile their scepticism with their behaviour or whether we will sleepwalk into dependency on systems we claim not to trust.
And perhaps that is the bigger story: AI’s future will not be shaped by what we say in surveys, but by what we keep typing into the prompt box.