

AI in ASIA
Voices

Why Everyone Hates AI and How to Fix It

Visceral public hostility to AI isn't irrational. It's structural, and Silicon Valley keeps making it worse.

Intelligence Desk · 14 min read

Public hostility to AI has moved beyond online commentary into visible cultural and labour movements worldwide.

AI Snapshot

The TL;DR: what matters, fast.

AI backlash is driven by broken tech trust, job anxiety, and cultural timing, not just fear of the new.

Generative AI uniquely attacks creative identity, the top of Maslow's pyramid, unlike previous waves of automation.

Asia-Pacific shows lower resistance where AI arrives as useful utility rather than visible cultural imposition.

Who should pay attention: AI product founders and marketers | Policy makers designing AI adoption strategies | Creative professionals navigating AI disruption

What changes next: As billions of first-time AI users arrive over the next two years, companies that have built genuine public trust will see adoption rates that dwarf those relying on inevitability arguments.

AI Has a Serious Trust Problem, and Silicon Valley Is Making It Worse

Scroll through TikTok comments on any AI-related video and you will find something striking: not scepticism, not mild concern, but outright hostility. Visceral, cutting, and increasingly mainstream. The technology that Silicon Valley insists will reshape civilisation is, for a large and vocal portion of the public, genuinely despised. That is not a small problem. It is a civilisational one.

Understanding why people hate AI requires more than dismissing critics as Luddites. It demands an honest reckoning with broken trust, economic anxiety, cultural timing, and something far deeper: the way artificial intelligence attacks human identity at its most fragile point.

By The Numbers

  • Vinyl record sales are at a 30-year high, reflecting a broad cultural shift toward analogue and authentic experiences, running counter to AI's synthetic nature.
  • The 2023 SAG-AFTRA actors' strike was the longest in Hollywood history, driven in significant part by fears over AI replacing performers.
  • In the first four years after World War I, more Americans died in car accidents than had been killed in battle in France, illustrating how technology backlash is rarely new.
  • ChatGPT launched in November 2022, coinciding with a period of widespread economic pessimism among American consumers.
  • Trump's State of the Union address, the longest in US history, mentioned AI just three times across nearly two hours, signalling how early the public policy conversation still is.

A Long History of Hating the New

Technology has always attracted critics. Even writing faced opposition. Socrates argued in Plato's Phaedrus that the written word would "introduce forgetfulness into the soul." He was not entirely wrong, but he was profoundly alarmist. The irony, of course, is that we only know his view because Plato wrote it down.

When the printing press arrived in the 1500s, the Swiss naturalist Conrad Gessner warned that the explosion of information would be "confusing and harmful" to the mind. Two centuries later, critics argued that newspapers would socially isolate readers and erode the communal ritual of receiving news from the pulpit. The automobile inspired headlines like "Nation Roused Against Motor Killings" in The New York Times. The phonograph, television, the internet, social media: each arrived to a chorus of alarm.

"We shape our tools and thereafter they shape us." – Marshall McLuhan, media theorist

Some of those fears were justified. Television almost certainly shortened attention spans and amplified cultural polarisation. Social media demonstrably harmed adolescent mental health. The pattern is consistent: new technology brings genuine benefits and genuine harms, and the public is rarely wrong to ask hard questions. What distinguishes AI backlash is its intensity. This is not reflexive fear of the new. It is something more structural.

Five Reasons Why People Hate AI Right Now

1. Bad Timing in a Broken Tech Ecosystem

Through the early 2010s, the technology sector was culturally ascendant. Working at Google or Facebook carried real social cachet. Sheryl Sandberg's Lean In was a cultural phenomenon. Apple's spaceship campus was under construction. By the time ChatGPT launched in late 2022, the mood had curdled entirely. The Cambridge Analytica scandal had exposed Facebook's contempt for user data. Studies linking Instagram to teenage depression were making front pages. Billions had been lost on meme coins and overpriced NFTs.

AI did not arrive into a welcoming environment. It arrived into one already primed for distrust. Research suggests that views on AI correlate with views on social media: countries that were more positively disposed towards social media when ChatGPT launched have proved more receptive to AI. Countries that view social media as a democratic threat have been far more hostile. AI inherited the sins of an entire industry.

2. Job Anxiety Is Not Irrational

The economic timing compounded the problem. ChatGPT launched at a moment when most consumers were already pessimistic about their financial futures. Into that anxiety walked a technology that its own proponents described using terms like "disruption," "transformation," and "copilot." To someone worried about paying rent, "copilot" sounds like a prelude to redundancy. The word "augmentation" sounds like the first step before elimination.

The instinct to dismiss job fears as irrational misreads the evidence. Knowledge workers in legal, creative, and administrative fields are already seeing AI-driven restructuring. Acknowledging this honestly, rather than deflecting with optimistic projections about net job creation, is the only credible path forward. As this publication has previously explored, AI transformation consistently fails when organisations ignore the human cost of the transition.

3. Creatives Drive Culture, and AI Threatens Them Directly

The sharpest and most culturally influential critics of AI are creative workers. When the filmmakers behind The Brutalist revealed they used AI to improve Adrien Brody's Hungarian accent, the backlash was immediate. Taylor Swift faced criticism for using AI-generated video in promotional material. An episode of the television series The Studio depicted an audience member confronting a studio executive over AI use in production, prompting Ice Cube, appearing as himself, to shout a memorable objection.

Meanwhile, the emergence of AI actors such as Tilly Norwood, covered prominently in The Hollywood Reporter, has kept the debate at peak cultural visibility. Creatives shape opinion. When they are vocally hostile, that hostility ripples into the broader culture in ways that a thousand positive press releases cannot counteract. The 2023 SAG-AFTRA strike, the longest in Hollywood history, made AI an industrial relations flashpoint that defined the news cycle for months.


Protestors at the 2023 SAG-AFTRA strike, the longest in Hollywood history, over AI concerns.

4. Authenticity Is In, and AI Is Synthetic

There is a powerful counter-cultural current running beneath the AI debate. Vinyl sales are at a 30-year high. Generation Z is buying film cameras and "dumb phones." There is a genuine and growing appetite for the analogue, the tactile, and the imperfect. AI, by definition, is synthetic. It produces outputs that are statistically plausible rather than humanly felt.

This tension predates large language models. The nostalgia economy was already booming before transformer architectures became mainstream. AI has accelerated it, but it did not create it. Being offline has become aspirational. Being unplugged signals intentionality and self-possession. Into this cultural climate, the most powerful AI companies are asking people to trust machine-generated text, images, and voice with the same confidence they once extended to human professionals. That is a significant ask.

5. AI Attacks Identity at Its Highest Point

This is the most psychologically acute dimension of the backlash. Previous waves of automation displaced workers at the bottom of Maslow's Hierarchy of Needs: the steam engine replaced physical labour, early software automated clerical tasks. These displacements were painful, but they did not touch what people considered their highest selves.

"AI climbs to the top of the pyramid and begins to dismantle it" – a framework for understanding why knowledge workers feel uniquely threatened by generative AI tools

Generative AI is different. It attacks creativity, professional expertise, and intellectual identity: the capacities that educated, skilled workers have built their sense of self around. A graphic designer whose identity is bound up in beautiful visual work faces something qualitatively different from a factory worker whose role was mechanised in 1975. The factory worker was never told their creativity was replaceable. The graphic designer is being told exactly that, loudly, every day.

Many of the angriest TikTok commenters are knowledge workers: people near the top of the educational and economic ladder who assumed their skills placed them beyond displacement. This inversion of automation's usual targets explains much of the visceral quality of the backlash.

How to Actually Fix the AI PR Problem

The genie is not going back into the bottle. AI will achieve mass adoption; the technology trajectory is not seriously in doubt. But the manner of that adoption matters enormously, both for the companies building these tools and for the societies absorbing them. A path through the hostility exists, but it requires honesty and strategic discipline.

Lead With Life-Saving Use Cases

The most compelling applications of AI are the ones that address fundamental human needs. AI systems that detect cancer earlier than any radiologist. Tools that flag sepsis risk in hospital wards. Models that accelerate drug discovery. These applications operate at the base of Maslow's pyramid, preserving life and alleviating suffering. They should be the flagship narratives for AI adoption, not afterthoughts in a press release about model parameters.

Reframe Capability as Problem-Solving

The technology industry has a chronic habit of leading with capability metrics. "This model has one trillion parameters" communicates nothing useful to a nurse or a small business owner. "This product eliminates four hours of weekly paperwork" communicates everything. Some AI companies have already begun shifting away from .ai domains and capability-first messaging, recognising that the audience for technical benchmarks is vanishingly small compared to the audience for solved problems. This shift needs to become universal.

Change the Messenger

The loudest pro-AI voices are venture capitalists and technology chief executives, two of the least trusted groups in public life. An AI marketing campaign fronted by farmers, community health workers, and independent tradespeople would be far more persuasive than any TED talk from a billionaire founder. Real users, filmed honestly, demonstrating genuine benefit: that is the playbook. Vague inspirational montages and thinly veiled competitor attacks are not.

Acknowledge Labour Market Disruption Honestly

The original Luddites were English textile workers who destroyed weaving machinery in the 1810s. They were not simply afraid of the new; they understood that the machinery would enrich factory owners while impoverishing skilled craftspeople in the near term. They were largely correct about their own immediate circumstances. Telling a displaced worker that AI will create more jobs in aggregate is not wrong, but it is cold comfort and it is received as dismissive.

The credible position is acknowledgement followed by action: honest recognition that labour market disruption is real, paired with genuine advocacy for retraining programmes, transition support, and worker protections. Countries and companies that skip the acknowledgement will face intensifying resistance. As Vietnam's recent AI legislation demonstrates, some governments are already moving to formalise AI's obligations to workers.

Keep Humans Visible in AI Products

One underexplored approach is building initiatives that place human creativity at the centre of AI-enabled work. A competition inviting people worldwide to produce the best animated short using AI tools, for instance, would demonstrate how the technology levels the playing field for storytellers without institutional resources. The artist remains visible; the AI is a tool, not the subject. More initiatives of this kind would do more for public sentiment than any amount of corporate communications spend.

The Asia-Pacific Picture

The AI trust deficit is not uniformly distributed. Asia-Pacific presents a markedly different and more nuanced landscape than the hostile Western reception described above, though significant variation exists across the region.

In China, AI development is a matter of explicit national strategy. As AIinASIA has reported, China has placed AI at the centre of its next Five-Year Plan, treating it as core industrial infrastructure rather than a consumer product requiring public approval. Public scepticism, to the extent it exists, operates in a very different regulatory and media context.

Singapore has positioned itself as a regional hub for responsible AI deployment, with the Infocomm Media Development Authority (IMDA) publishing model AI governance frameworks adopted as reference points across Southeast Asia. The city-state's relative openness to AI reflects higher baseline institutional trust and a labour market that has historically managed technology transitions through active skills policy.

Japan presents a more complex picture. Cultural enthusiasm for robotics and automation sits alongside genuine concern about AI's impact on creative industries, particularly manga, animation, and music, sectors where Japan has enormous cultural investment. The Japan Federation of Bar Associations has raised formal concerns about AI in legal practice, mirroring the professional anxiety seen in Western markets.

Across Southeast Asia, AI adoption is accelerating in practical domains: e-commerce personalisation, fintech credit scoring, and healthcare triage. AI has already transformed how Asia shops, often without consumers being aware of it. The backlash dynamic is less pronounced where AI is encountered as a background utility rather than a visible cultural imposition. Meanwhile, nations including Vietnam are moving early to structure the relationship between AI and society through law. Vietnam is also investing heavily in AI education from primary school level, a generational approach to building familiarity and reducing fear.

The lessons from Asia-Pacific are instructive for the wider debate. Where AI arrives embedded in useful services, where governments provide clear frameworks, and where the public develops familiarity through practical use rather than media panic, resistance is lower. That is not a coincidence.

Comparing Technology Backlash Across History

Technology | Era | Primary Fear | Outcome
Writing | Ancient Greece | Memory loss, intellectual decay | Enabled civilisational advancement
Printing press | 1500s | Information overload, social confusion | Democratised knowledge
Automobile | Early 1900s | Mass casualties, social disorder | Transformed mobility; risks were real
Television | Mid-1900s | Shortened attention spans, passivity | Criticisms largely validated
Social media | 2000s–2010s | Mental health harm, misinformation | Mixed; significant harms confirmed
Generative AI | 2020s | Job displacement, identity threat, inauthenticity | Too early to assess; backlash is structural

What Silicon Valley Gets Wrong About Public Resistance

There is a particular kind of smugness at work in the Valley's response to AI critics. The argument runs roughly: technology always wins, adoption always comes, the critics are always wrong in the end. This is partially true and strategically disastrous. The automobile did win, but it killed hundreds of thousands of people and restructured cities in ways that took a century to partially undo. Television did win, and it contributed to exactly the social harms critics predicted.

Winning the technology race is not the same as winning public trust. A society that adopts AI under duress, resentfully and without adequate safeguards, will produce worse outcomes than one that builds genuine understanding and consent. The rapid expansion of AI into software development through tools like vibe coding is already straining developer communities that feel the change is being imposed rather than chosen.

The billions of people who have not yet used AI are not a problem to be solved through better marketing. They are a constituency whose concerns deserve substantive responses. Silicon Valley's long-term success depends on building trust with that constituency, not steamrolling it.

Frequently Asked Questions

Why do so many people hate AI despite its obvious benefits?

The backlash reflects a convergence of factors: pre-existing distrust of the technology industry after scandals involving social media and data privacy; economic anxiety among workers who fear displacement; a cultural moment that prizes authenticity and analogue experience; and, most significantly, the way generative AI targets creative and intellectual identity rather than just physical or clerical labour. People do not hate the concept of useful tools. They are reacting to a technology that feels threatening to who they are.

Is AI job displacement a real risk or media panic?

Both, simultaneously. The long-run economic consensus is that AI will create new categories of work while eliminating others, as previous waves of automation have done. But this aggregate picture does not help a legal researcher or graphic designer whose specific role is being restructured today. The displacement is real for many individuals even if net job creation ultimately proves positive. Dismissing job fears as irrational is factually imprecise and politically counterproductive.

What can AI companies do to rebuild public trust?

The most effective steps involve changing the message, the messenger, and the frame. Lead with life-saving and problem-solving applications rather than technical capability benchmarks. Use real customers rather than venture capitalists as spokespeople. Acknowledge labour market disruption honestly and advocate for retraining programmes. Keep human creativity visible in AI-enabled products. And resist the temptation to dismiss critics: the public has been right about technology's downsides before.

The AIinASIA View: The AI industry's PR crisis is self-inflicted, and the cure is not better spin. Companies that lead with honesty about disruption, showcase real human benefit, and stop letting VCs do their public communications will pull away from those that keep bulldozing a sceptical public with capability benchmarks nobody asked for.

If you had to pick one AI application to show a sceptical friend or colleague something genuinely useful, what would it be, and has it actually changed their mind? Drop your take in the comments below.

