AI is everywhere, or so the story goes. But when you cut through the hype, how are people actually using it? Three heavyweight studies from OpenAI, Anthropic, and Ipsos finally put some numbers behind the anecdotes. Their findings are surprisingly ordinary, occasionally contradictory, and deeply revealing about what adoption means in practice.
- ChatGPT and Claude are mostly used for writing, coding, and information lookups, not futuristic fantasies.
- Workplace adoption is split: survey data shows a slowdown, while enterprise API logs suggest steady growth.
- Trust is fragile: people use AI daily, but only half say they trust the companies that make it.
The Mundane Reality of ‘Killer Apps’
OpenAI analysed over a million ChatGPT conversations from mid-2024 to mid-2025. The results are clear: the vast majority of queries are about writing, quick guidance, or information lookups. Computer programming makes up only 4 per cent. Reflection-style “therapy chats” barely register.
Anthropic’s Claude tells a different but complementary story. Coding dominates (36 per cent), but education and science are climbing steadily. Crucially, Claude users are more likely to delegate tasks wholesale (“you do it”) rather than engage step by step. This signals a subtle but important shift: a move from assistance to outright substitution. You can learn more about how Claude works in teams by reading our article on how Claude brings memory to teams at work.
The message from both sets of data is consistent. AI is settling into everyday niches where it works best. Forget moonshots. The killer apps are editing emails, fixing code, and summarising research papers. For some practical applications, check out these 20 menial tasks ChatGPT handles in seconds.
Work vs Play: Two Realities
Here is where the contradictions begin. OpenAI reports that ChatGPT’s work-related usage has dropped from 40 per cent to just 28 per cent. Ipsos’ global survey reinforces this: in many countries, AI is seen more as a personal assistant than a professional backbone.
Yet Anthropic’s enterprise API data says the opposite. It finds that 40 per cent of U.S. employees now use AI at work, up from just 20 per cent in 2023. The logs show heavy-lifting use cases: debugging web apps, writing business software, and even designing other AI systems.
The contradiction may not be a contradiction at all. Chat interfaces attract hobbyists, students, and casual experimenters. API integrations are where serious work happens, often invisibly. Adoption, then, is not shrinking or booming; it is diverging.
The Trust Paradox
Ipsos’ AI Monitor 2025 captures the ambivalence neatly. More than half of respondents (54 per cent) say they trust governments to regulate AI. Just under half (48 per cent) say they trust companies to handle their data responsibly. The gap is small, but telling.
OpenAI’s Sam Altman voiced this paradox at the Paris AI Summit. Safety, he said, is critical: “we’ve got to make these systems really safe for people, or people just won’t use them.” Yet he also admitted the louder demand is not safety but scale, cost, and capability. Users worry about trust, then flock back to the platforms they mistrust. For more on the ethical considerations of AI, consider this report on AI ethics from the World Economic Forum.
Why Companies Still Hesitate
Anthropic’s Claude report points out a less glamorous reason why AI adoption stalls. Productivity gains require more than smart models. They demand restructuring processes, retraining staff, and updating data systems: costly and time-consuming exercises.
Put simply, AI is not plug-and-play. It is a reengineering project. Without the right context and infrastructure, even the most advanced models stumble.
Meanwhile, Ipsos finds that adoption is heavily skewed by demographics. Young, male, and well-educated users dominate. That leaves large swathes of the population on the margins, even as the most popular personal uses, advice and information searches, are the ones most prone to hallucination.
Between Usage and Hesitation
Pull the strands together and a pattern emerges. AI in 2025 is used for the ordinary, not the extraordinary. ChatGPT logs show writing dominates. Claude’s enterprise data points to invisible but growing workplace use. Ipsos surveys highlight public hesitation, even as reliance deepens.
The paradox is not whether AI will be used. It already is, woven into daily habits. The real question is whether people and companies will reconcile their scepticism with their behaviour or whether we will sleepwalk into dependency on systems we claim not to trust.
And perhaps that is the bigger story: AI’s future will not be shaped by what we say in surveys, but by what we keep typing into the prompt box.