
How the AI arms race traps us all on an upgrade treadmill

This article explores how the global AI arms race has created an upgrade treadmill, shifting responsibility onto individuals, destabilising careers, and accelerating social anxiety. Drawing on examples from China and the West, it argues for a new model that prioritises stability and human security.

Anonymous · 5 min read

From scams in Shenzhen to retraining courses in London, the accelerating pace of AI disruption is forcing everyone into a cycle of endless adaptation.

AI disruption is engineered as part of a "disruption-as-a-service" model, trapping societies in constant cycles of upskilling and insecurity. China offers a glimpse of a high-speed future, where open-sourced AI models and viral scams drive widespread anxiety and rapid adaptation. The burden of adaptation is shifting onto individuals, creating a universal treadmill on which professional stability is in constant jeopardy.

The treadmill beneath our feet

The phrase AI arms race has become shorthand for geopolitical rivalry and corporate competition, yet it also describes a more intimate reality. Around the world, individuals find themselves pressed to learn faster, adapt harder, and spend more simply to remain relevant. Beijing community centres now run workshops for pensioners to spot deepfake calls, while charities in London organise scam-prevention courses for retirees. Meanwhile, professional translators in Shanghai or New York are racing to master neural models that threaten to automate their livelihoods.

This is not an accident. It is the hallmark of a new commercial logic: disruption as a service. Big Tech builds churn into the system, creating anxiety and obsolescence, then markets the very tools required to cope. The result is an upgrade treadmill, where the only option is relentless upskilling, with no guarantee that today's investment will still matter tomorrow.

China’s glimpse of the high-speed future

China offers a vivid view of where this treadmill leads when technology, competition, and social anxiety converge. The stakes are not abstract. In February, Hong Kong’s branch of Arup, the British multinational, lost HK$200 million (US$25 million) when deepfake fraudsters impersonated senior executives in a video call. Meanwhile, viral scams have weaponised celebrity images such as diver Quan Hongchan, defrauding ordinary citizens out of their savings.

Such cases have heightened both public anxiety and regulatory urgency. But disruption is not limited to crime. It is baked into the workplace. Tencent’s decision to open-source two world-class AI platforms illustrates the paradox.

Hunyuan-MT, a translation tool that handles internet slang with uncanny precision, signals to translators that mastery of the tool is essential for survival. Hunyuan-Voyager, which can generate 3D worlds from a single image, poses existential questions for artists and game designers.

Democratising such models is a remarkable act of technological openness, but it also accelerates the treadmill. Professionals cannot afford to ignore them, nor can they assume mastery will secure long-term relevance.

A slower burn in the West

Western societies face similar tremors, though at a slower pace. Generative AI has sent enrolments surging on platforms such as Coursera, while organisations like Age UK now offer scam-prevention courses as a matter of public need.

Yet the treadmill is not merely consumer-driven. It is actively shaped by geopolitics. U.S. start-up Anthropic recently cut access for Chinese firms, a move with consequences far beyond corporate rivalry. Chinese companies that had integrated Anthropic’s models must now pivot to domestic or open-source alternatives. Intended as a strategic block, the decision has inadvertently fuelled the very arms race it sought to contain, forcing thousands of engineers and managers into abrupt technological recalibration.

The burden shifts to the individual

Perhaps the most striking feature of this treadmill is its individualisation of responsibility. A Pew Research Center survey found that more than half of U.S. workers worry about AI’s impact on their careers, with a third reporting they feel overwhelmed by the pace of change.

The commercial ecosystem compounds this anxiety. Meta has long faced criticism for enabling misinformation on its platforms. Yet it simultaneously offers AI literacy and misinformation detection tools — a neat illustration of the paradox. The harm is systemically created, while the cost of mitigation is borne by individuals.

For professionals, the new reality is relentless: scan Hugging Face for updates, test GitHub repos, enrol in the next course, or risk redundancy. What feels like self-improvement is often survival training against a system optimised to outpace us.

Rethinking the model

The treadmill metaphor is apt because it captures both the exertion and the futility. The more we run, the faster the machine seems to accelerate. Left unchecked, this dynamic risks entrenching not only digital exclusion but a deeper societal anxiety. The destabilisation of careers is no longer a marginal risk — it is becoming systemic.

The lesson is straightforward. Upskilling is necessary, but it cannot be the only answer. A shift in corporate ethos is overdue. Instead of disruption first, responsibility later, technology could be developed with security and human well-being at its foundation. This is not a utopian plea but a practical challenge to the current model. The arms race is not inevitable; it is a choice. And like any treadmill, we can decide to step off.

If the AI upgrade treadmill is built into the business model itself, how might Asian policymakers, investors, and technologists collaborate to design systems that prioritise stability as well as speed? The World Economic Forum has published research on the future of jobs and the need for reskilling in the age of AI.



Latest Comments (5)

Rachel Foo (@rachelf)
16 October 2025

oh man the "disruption-as-a-service" model really hits home. we're trying to roll out some internal AI tools at the bank and it's a constant battle just getting past compliance and security. then you have teams worried about their roles being automated, while others are scrambling to learn new platforms every few months. feels like we're constantly trying to upskill everyone on tools that might be obsolete next year. it's exhausting for everyone. the Arup deepfake scam in HK though, that's wild. we've had so many internal discussions about deepfake risk in our video calls now because of things like that.

Elaine Ng (@elaineng)
14 October 2025

The Arup deepfake scam in Hong Kong you mention really highlights something we’re seeing more of. It’s not just about the tech, but how it exploits trust and existing social anxieties, especially around authority figures in a corporate context. From a media studies perspective, these deepfakes muddy the waters of authentic communication and create a crisis of verifiable reality. It forces us to reconsider the very nature of evidence in a digital age, and the role of visual media in shaping our perceptions. It’s a fascinating, if worrying, case study in how synthetic media impacts societal structures beyond just individual deception.

Dr. Farah Ali (@drfahira)
7 October 2025

The point about deepfake scams, like the Arup incident in Hong Kong, really underscores the global reach of these issues. While the article highlights examples from China and the West, this "disruption-as-a-service" model isn't contained to those regions. We see similar patterns emerging in Pakistan and across the Global South. The concern isn't just professional translators needing to upskill; it's about access to these upskilling opportunities, the digital literacy required, and how these scams disproportionately impact vulnerable populations who might not have the resources or community support networks available in, say, Beijing or London. The equity question here is critical.

Tran Linh (@tranl)
30 September 2025

The deepfake scam on Arup in HK is crazy. We're fighting similar battles with voice and video fakes in Vietnamese, especially for banking apps. It's a constant race to keep up.

Harry Wilson (@harryw)
29 September 2025

interesting how the article presents the "disruption-as-a-service" idea as a new commercial logic. has anyone actually seen empirical data or studies that directly link particular tech company strategies to intentionally engineered "churn" and obsolescence, or is that more of a theoretical framing for the current situation? feels like a pretty strong claim.
