The AI Boom Nobody Asked For: Why Public Hostility Is Rattling Silicon Valley
There is a peculiar dynamic playing out in the technology sector right now. Billions of dollars are being poured into artificial intelligence infrastructure, chief executives are giving breathless keynotes about civilisational transformation, and the public response is somewhere between deep scepticism and outright contempt. Welcome to the most unloved boom in modern economic history.
That is not hyperbole. Historians of financial bubbles are struggling to find a precedent for a technology wave that has generated this level of active hostility from the very consumers it is supposed to serve. And as the data accumulates, the gap between what AI executives are selling and what the public is buying grows harder to ignore.
By The Numbers
- Only 3% of US AI users were regularly paying for AI tools in mid-2025, despite widespread availability of consumer products
- 60% of respondents in a 2025 Pew Research survey said they want more control over how AI is used in their lives
- 17% of respondents in the same Pew survey said they are comfortable with AI remaining in the hands of a small number of tech billionaires
- Investor sentiment in AI had cooled measurably by December 2025, following months of uncritical institutional hype
The Quote That Says Everything
William Quinn, co-author of the 2020 financial history Boom and Bust: A Global History of Financial Bubbles, put the situation bluntly when speaking to the New York Times.
"I can't really remember a boom with such active hostility to it. People usually find new technology exciting. It happened with electricity, bicycles, motorcars. There were fears but also hopes. AI is notable, perhaps unique, for the lack of enthusiasm." - William Quinn, Co-author, Boom and Bust: A Global History of Financial Bubbles
That observation is striking precisely because it cuts against the standard narrative that public resistance to new technology is temporary. Electricity was frightening. The internet was confusing. But both generated genuine popular excitement alongside the fear. AI, apparently, is different. The hostility is not a transitional phase. It appears structural.
And yet the executives most invested in AI's success seem genuinely bewildered by it. Nvidia's chief executive Jensen Huang described the situation in terms that would feel at home in a therapy session.
"It's extremely hurtful, frankly." - Jensen Huang, Chief Executive Officer, Nvidia
Huang went on to argue that AI is suffering serious reputational damage from "very well-respected people who have painted a doomer narrative, end-of-the-world narrative, science fiction narrative." His framing positions scepticism as misinformation rather than a rational response to observable outcomes.
The Altman Problem: Selling a Product Nobody Is Buying Into
OpenAI's chief executive Sam Altman has expressed similar frustration. Speaking at the Cisco AI Summit, he described the pace of AI adoption in broader society as "surprisingly slow," lamenting pushback against the "diffusion, the absorption" of AI into everyday life. The implication is clear: if people aren't adopting AI fast enough, the problem is with people's attitudes, not with the product.
This is a remarkable position for any chief executive to hold publicly. Consumer resistance is typically treated as market signal, not a communications failure. When a product faces sustained public hostility, the conventional response is to examine the product. Instead, the AI industry's most prominent voices have largely doubled down, treating the public's reluctance as a PR problem to be managed rather than feedback to be processed.
For a deeper look at how relentless AI productivity pressure is affecting the people who use these tools daily, see our piece on the cognitive toll of AI-driven productivity culture.

What the Data Actually Shows
The 2025 Pew Research findings deserve more attention than they have received. The headline result is damning: 60 percent of respondents want greater control over AI's role in their lives, and only 17 percent are comfortable with the technology remaining concentrated in the hands of a small number of wealthy technologists. These are not fringe positions. The first is a clear majority; the second implies that more than four in five respondents are uneasy with concentrated control.
The payment data is equally sobering. In mid-2025, with consumer AI tools widely available and aggressively marketed, only 3 percent of US AI users were paying for those tools on a regular basis. The freemium model can mask this for a time, but it cannot obscure the underlying truth: most people who engage with AI do not value it enough to pay for it. That is a meaningful signal about perceived utility, not just price sensitivity.
- Public distrust of AI is concentrated around issues of control, transparency, and accountability
- The technology's association with job displacement, academic dishonesty, and military targeting has deepened resistance
- Institutional investor sentiment had shifted noticeably negative by the end of 2025, suggesting the hype cycle may have peaked
- The "vocal minority" defence used by AI boosters is contradicted by consistent, large-sample survey data
The Asia-Pacific Picture
The public hostility dynamic is not uniform across geographies, and Asia-Pacific presents a genuinely more complex picture. In markets such as Japan, South Korea, and Singapore, AI adoption has been pushed hard at the institutional level, with governments actively integrating AI into public services and education systems. Consumer sentiment in these markets has been more mixed than the flat hostility documented in North American surveys.
China presents a different case entirely. Beijing's five-year AI development plan has created a policy environment in which AI adoption is treated as a national strategic priority rather than a consumer choice. The question of public enthusiasm is, in that context, somewhat beside the point. You can read more about China's state-directed AI revolution and its five-year technology drive for broader context on how that model diverges from Western market dynamics.
In India, where the AI tools market is growing rapidly among younger, urban demographics, adoption has been driven more by practical utility in job markets and education than by enthusiasm for the technology itself. This is a notable distinction: people using AI because they feel they have to, not because they want to. That is a fragile foundation for a consumer market.
Across the region, small and medium-sized businesses are engaging with AI tools on strictly pragmatic terms. Our coverage of how small businesses are navigating the AI era shows a pattern of selective, sceptical adoption rather than wholesale buy-in.
Energy infrastructure concerns are also shaping the AI debate in Asia-Pacific. The extraordinary power demands of large AI data centres are generating genuine policy friction in markets including Singapore and South Korea, where grid capacity is a binding constraint. Innovative proposals such as floating data centres designed to address the energy crisis reflect the scale of the challenge the industry faces in sustaining its expansion.
The Structural Mismatch at the Heart of the AI Bubble
There is a fundamental tension that the industry's boosters have not adequately addressed. The case for AI has been made almost entirely in the language of productivity, efficiency, and GDP growth. These are arguments that resonate with investors, analysts, and policy makers. They have limited purchase with a working parent worried about their child's education, a graduate worried about the job market, or a citizen worried about autonomous weapons systems.
The comparison to previous technology booms is instructive in ways Quinn's observation does not fully capture. Electricity, the motorcar, and the internet all had visible, tangible benefits that individuals could access and appreciate directly. AI's benefits, in contrast, are often abstract, aggregate, or accrue primarily to organisations rather than individuals. The costs, however, are highly personal: job insecurity, loss of creative agency, data privacy erosion, and a pervasive sense that important decisions are being delegated to systems nobody can explain or hold accountable.
| Technology | Public Sentiment at Launch | Visible Individual Benefit | Primary Concern |
|---|---|---|---|
| Electricity | Excited with some fear | Immediate, personal (light, heat) | Safety |
| Motor car | Enthusiastic | Mobility, freedom | Danger to pedestrians |
| Internet | Curious, then enthusiastic | Information access, communication | Privacy, addiction |
| AI (2023-2025) | Sceptical to hostile | Diffuse, often organisational | Jobs, control, accountability |
The industry's response to this mismatch has been to intensify its communications efforts rather than rethink its product positioning. That strategy may sustain investment flows in the short term. It will not rebuild public trust.
Frequently Asked Questions
Why do people distrust AI so much compared to previous technologies?
Unlike electricity or the internet, AI's benefits are largely abstract or accrue to organisations rather than individuals, while the costs, such as job displacement, loss of privacy, and reduced human agency, are personal and immediate. A 2025 Pew Research survey found 60 percent of respondents want more control over how AI is used in their lives, reflecting deep structural concern rather than temporary unfamiliarity.
Are AI companies actually losing money because of public hostility toward AI?
Consumer revenue data suggests significant limitations on monetisation. In mid-2025, only 3 percent of US AI users were paying for tools regularly. Investor sentiment also turned noticeably colder by December 2025. The industry remains heavily dependent on enterprise and infrastructure investment rather than direct consumer revenue, which makes it vulnerable to shifts in institutional confidence.
Is AI adoption higher in Asia than in Western markets?
The picture is mixed. In markets like China and Singapore, government policy actively promotes AI adoption, which shapes uptake differently from consumer-led Western markets. In India, adoption is growing among younger urban populations, often driven by economic necessity rather than enthusiasm. Across Asia-Pacific, small business adoption tends to be selective and pragmatic rather than enthusiastic.
So given what you now know about where public sentiment on AI actually stands, what would it take for you to genuinely trust an AI product with something important in your own life? Drop your take in the comments below.