
AI in ASIA
News

"I’m deeply uncomfortable with these decisions" - Anthropic's CEO

Anthropic's CEO admits deep discomfort with AI power concentration as the company warns of the first large-scale AI cyberattack executed autonomously.

Intelligence Desk | 4 min read

AI Snapshot

The TL;DR: what matters, fast.

Anthropic CEO admits no democratic mandate for AI development decisions by tech leaders

First documented large-scale AI cyberattack executed without human intervention occurred

Regulatory fragmentation across regions creates compliance challenges for AI companies

Anthropic's CEO Calls for Urgent AI Regulation Amid Safety Concerns

Anthropic's chief executive Dario Amodei has delivered one of the strongest calls yet for AI regulation from within the industry. In a candid interview on CBS News' 60 Minutes in November 2025, he expressed deep discomfort with the concentration of power over AI development in the hands of a few technology leaders.

"I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people," Amodei stated. "And this is one reason why I've always advocated for responsible and thoughtful regulation of the technology."

When host Anderson Cooper pressed him on democratic legitimacy, asking "Who elected you and Sam Altman?", Amodei's answer was stark: "No one. Honestly, no one."

This admission highlights a fundamental tension in AI governance: who should control technologies that could reshape society? Unlike traditional industries where market forces and regulation evolved gradually, AI development is racing ahead faster than oversight mechanisms can adapt.


The First AI Cyberattack Changes Everything

Anthropic's concerns gained urgency following what the company described as "the first documented case of a large-scale AI cyberattack executed without substantial human intervention." This incident, disclosed ahead of the CBS interview, represents a watershed moment for AI security.

The attack validates earlier warnings from cybersecurity experts, including former Mandiant CEO Kevin Mandia, who predicted such AI-agent attacks would materialise sooner than expected. For businesses across Asia, this development signals a new era of cyber threats that traditional security measures may struggle to address.

By The Numbers

  • Anthropic reached a $380 billion valuation in its latest funding round
  • The company donated £16 million to Public First Action, a super PAC focused on AI safety
  • All 50 US states introduced AI-related legislation this year, with 38 adopting safety measures
  • Anthropic's Claude chatbot achieved a 94% political even-handedness rating
  • OpenAI maintains an estimated $500 billion valuation, ahead of Anthropic

The competitive landscape is intensifying. Amodei previously warned that sitting "on the sidelines" would mean Anthropic would "lose and stop existing as a company." This pressure creates an inherent conflict between safety priorities and commercial survival.

Global Regulatory Fragmentation Creates Compliance Challenges

The regulatory landscape varies dramatically across regions. The EU AI Act sets a global benchmark for comprehensive AI regulation, whilst the United States lacks federal AI-specific legislation. This fragmentation poses significant challenges for companies operating internationally.

Region | Regulatory Approach | Implementation Status
European Union | Comprehensive AI Act | Fully implemented
United States | State-by-state patchwork | Fragmented adoption
Singapore | Model AI Governance Framework | Under development
China | Algorithm regulation focus | Strict enforcement

In Asia, regulatory approaches differ markedly. Singapore is pioneering agentic AI governance frameworks, whilst Vietnam has enacted Southeast Asia's first comprehensive AI law. These varying requirements create operational complexity for multinational AI companies.

The challenges extend beyond compliance. Cultural nuances around AI ethics vary significantly across Asian markets, requiring tailored approaches for content moderation and user interaction. Companies must navigate not just regulatory differences but also social expectations around AI behaviour.

Safety Theatre or Genuine Commitment?

Anthropic's founding story centres on AI safety. Amodei departed OpenAI in 2021 due to disagreements over safety priorities, taking several researchers with him to establish a competitor focused explicitly on safe AI development.

"There was a group of us within OpenAI that had a very strong belief in two things," Amodei explained to Fortune. "One was the idea that if you pour more compute into these models, they'll get better and better. And the second was that you needed something in addition to just scaling the models up, which is alignment or safety."

Anthropic has implemented several safety measures:

  • Constitutional AI approach that imbues models with values rather than strict rules
  • Responsible Scaling Policy pledging not to release AI capable of catastrophic harm
  • Regular safety reports documenting model vulnerabilities and limitations
  • Transparency initiatives including political neutrality testing
  • Financial support for AI safety research organisations

However, critics question whether these measures constitute genuine safety commitment or strategic positioning. Meta's former chief AI scientist Yann LeCun accused Anthropic of "regulatory capture," suggesting the company uses safety warnings to influence legislation against open-source competitors.

Recent internal tensions underscore these debates. AI safety researcher Mrinank Sharma resigned from Anthropic, citing concerns that "the world is in peril" and expressing frustration with balancing values against commercial pressures.

The Economic Pressure Cooker

The fundamental tension between safety and profitability creates ongoing challenges for AI companies. Amodei acknowledges this pressure candidly, noting that Anthropic faces "incredible commercial pressure" whilst trying to maintain safety standards that exceed industry norms.

Brian Jackson from Info-Tech Research Group explains the financial reality: unlike traditional tech services, large language models carry substantial per-query costs. The infrastructure requirements for AI companies including data centres, GPUs, and cloud computing create ongoing capital expenditure that demands revenue growth.

"As AI scales and as more usage grows, they're not necessarily going to get to that profitability as easily or as quickly, because the cost per prompt is so high," Jackson observed.

This economic reality affects the entire industry. Major tech companies continue pouring billions into Anthropic, whilst Asian competitors like Tencent launch new reasoning models to capture market share.

The competitive intensity means companies must balance innovation speed with safety considerations. Sitting still risks market irrelevance, but moving too fast risks catastrophic failures that could damage the entire industry's reputation.

Three Horizons of AI Risk

Amodei categorises AI risks across three distinct timelines, each requiring different regulatory approaches:

Short-term risks include bias and misinformation, already impacting public discourse and democratic processes. These immediate concerns require swift regulatory intervention and industry self-regulation.

Medium-term threats involve AI systems generating harmful information using enhanced scientific knowledge. This includes potential creation of biological weapons or sophisticated cyber attacks, as demonstrated by Anthropic's recent security incident.

Long-term existential risks centre on AI potentially removing human agency from critical systems. These concerns align with warnings from AI pioneer Geoffrey Hinton about systems that could outsmart and control humans within the next decade.

The AIinASIA View: Amodei's uncomfortable honesty about AI governance highlights a critical moment for the industry. Whilst his calls for regulation are commendable, the fundamental tension between safety and commercial pressure remains unresolved. Asia's fragmented regulatory landscape creates both opportunities and challenges. Countries like Singapore and Vietnam are pioneering governance frameworks that could influence global standards. However, the region's economic importance means any regulatory missteps could either accelerate or derail responsible AI development worldwide. We believe Asia must take a leading role in harmonising AI governance whilst preserving innovation incentives.

Who should regulate AI development globally?

A combination of international bodies, national governments, and industry self-regulation is needed. No single entity can effectively govern a technology with such broad implications across borders and sectors.

Can AI companies truly prioritise safety over profits?

The current venture capital model creates inherent tensions. Companies need sustainable business models that don't compromise safety, potentially requiring new funding structures or regulatory frameworks that support responsible development.

How do cultural differences affect AI regulation in Asia?

Asian markets have varying expectations around privacy, government oversight, and social responsibility. China's structured approach contrasts sharply with Singapore's market-friendly frameworks, requiring companies to adapt strategies by jurisdiction.

What makes Anthropic's safety approach different from competitors?

Anthropic employs Constitutional AI, which trains models using principles rather than rules. The company also publishes detailed safety reports and maintains policies against releasing potentially catastrophic systems.

Why are AI operational costs so high compared to traditional tech services?

Large language models require massive computational resources for both training and inference. Unlike web searches that cost fractions of a penny, each AI interaction requires significant processing power, creating ongoing expenses that challenge traditional tech economics.

The question remains whether the current model of AI development can sustainably balance innovation with safety. As Amodei continues warning about AI firms posing risks to humanity, the industry faces mounting pressure to resolve these fundamental tensions before they spiral beyond control.

Can an industry built on exponential growth truly prioritise long-term safety over short-term gains? What role should Asian governments play in shaping global AI governance standards? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (4)

Charlotte Davies (@charlotted) | 28 February 2026

Always good to see leaders like Amodei acknowledge the lack of democratic oversight here. His "no one. honestly no one" response to Cooper’s question really highlights the urgency for frameworks similar to what the UK AI Safety Institute is developing for fine tuning models.

Divya Joshi (@divyaj) | 28 February 2026

Good for him.

Natalie Okafor (@natalieok) | 28 February 2026

just sent this to my team. amodei's point about few companies making these critical ai decisions really resonates in healthcare. we're constantly balancing innovation with patient safety and ethical guidelines. the idea that the "ai race" drives development without enough external oversight is a big concern, especially with that documented large-scale ai cyberattack. it highlights how quickly these risks are evolving, and we need robust regulatory frameworks that move just as fast. if even the people building it are uncomfortable, that's a serious signal for everyone in the field.

Rizky Pratama (@rizky.p) | 23 February 2026

"No one. Honestly, no one." This quote is getting so much traction on LinkedIn. Valid point.
