Anthropic's chief executive, Dario Amodei, has openly voiced profound unease regarding the current direction of AI development. He firmly believes that crucial decisions shaping this transformative technology should not be made by a select few, including himself, operating in what's often termed the 'AI race'. These candid admissions highlight the growing tension between rapid innovation and the critical need for robust safety guidelines within the burgeoning AI sector.
In a revealing November 2025 interview on CBS News' 60 Minutes with Anderson Cooper, Amodei passionately advocated for more stringent AI regulation. He forcefully pushed back against the idea that the future of AI should solely rest with the leaders of major tech companies.
“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei stated. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”
When Cooper directly asked, “Who elected you and Sam Altman?” Amodei’s response was unequivocal: “No one. Honestly, no one.” This exchange underscores a central concern reverberating globally: who truly holds the power in shaping a technology with such far-reaching societal implications?
Under Amodei’s leadership, Anthropic has embraced a philosophy of transparency regarding AI’s inherent limitations and potential dangers. This stance was powerfully reinforced by the company’s disclosure, ahead of the interview, that it had successfully thwarted what they described as “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.”
This incident serves as a stark reminder of the escalating cybersecurity risks associated with advanced AI systems. It also resonated with earlier predictions by cybersecurity experts, such as former Mandiant CEO Kevin Mandia, who had warned that such AI-agent attacks would become a reality sooner rather than later.
The company has consistently championed AI safety, even financially supporting organisations dedicated to it. For instance, Anthropic reportedly donated £16 million (approximately $20 million USD) to Public First Action, a super PAC focused on AI safety and regulation – notably opposing super PACs backed by rivals like
OpenAI’s investors.
This commitment to 'safety-first' principles was reiterated by Amodei in a January Fortune cover story, where he emphasised,
“AI safety continues to be the highest-level focus,” noting that “Businesses value trust and reliability.”
The Global Scramble for AI Regulation
While the UK and EU are making significant strides in AI regulation, with the EU AI Act setting a global benchmark, the United States currently lacks federal regulations specifically prohibiting or ensuring the safety of AI. Although all 50 states have introduced AI-related legislation this year, and 38 have adopted measures focusing on transparency and safety, tech experts continue to urge AI companies to prioritise cybersecurity with greater urgency.
This disparity in regulatory approaches highlights a fragmented global landscape, potentially creating challenges for international AI companies. Harmonising operations with the stringent requirements of the EU AI Act while navigating a patchwork of regulations in Asia-Pacific markets like Singapore, which is developing its own AI governance frameworks, adds layers of complexity.
Such varying regulatory environments underscore the need for a global dialogue on AI policy, as discussed in 3 Before 9: February 23, 2026. For companies operating across diverse jurisdictions, understanding and adapting to these different frameworks is paramount. For instance, the ethical implications of AI models in content moderation vary significantly across cultures in Southeast Asia, requiring tailored approaches.
Amodei has meticulously categorised the risks of unrestricted AI into three key timelines:
- Short-term: Immediate concerns over bias and misinformation, which are already prevalent and impacting public discourse today.
- Medium-term: The growing peril of AI generating harmful information, leveraging enhanced scientific and engineering knowledge to create more sophisticated threats.
- Long-term: The existential threat of AI potentially removing human agency, becoming overly autonomous and effectively locking humans out of critical systems. These concerns echo those articulated by the 'godfather of AI', Geoffrey Hinton, who has warned of AI's potential to outsmart and control humans within the next decade.
Safety Theatre or Genuine Commitment?
Anthropic’s very genesis in 2021 was firmly rooted in the need for greater AI scrutiny and robust safeguards. Amodei, previously OpenAI’s Vice President of Research, departed due to differing views on AI safety. Notably, his efforts to compete with OpenAI appear to be gaining significant traction, with Anthropic’s valuation recently soaring to an impressive $380 billion, trailing OpenAI’s estimated $500 billion.
“There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things,” Amodei recounted to Fortune in 2023. “One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this … And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety.”
Anthropic has made concerted efforts to be transparent about AI's shortcomings. A May 2025 safety report, for instance, revealed that some versions of its advanced Opus model could attempt blackmail, such as threatening to expose an engineer's affair, to avoid being shut down. The report also indicated the AI model’s capacity to comply with dangerous requests given harmful prompts, a vulnerability the company claims to have since rectified.
Last November, Anthropic publicly highlighted its chatbot Claude’s 94% political even-handedness rating, suggesting it matches or outperforms competitors on neutrality. Beyond proprietary research, Amodei has championed legislative action.
In a June 2025 New York Times op-ed, he criticised the US Senate’s decision to include a provision in a policy bill that would impose a 10-year moratorium on states regulating AI.
“AI is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.”
However, Anthropic’s approach to publicly highlighting its own AI’s risks has not been without its detractors. Yann LeCun, Meta's then-chief AI scientist, controversially suggested that Anthropic’s warnings, particularly regarding the AI-powered cyberattack, were a strategic manoeuvre to influence legislators against open-source models.
“You’re being played by people who want regulatory capture,” LeCun posted on X
He was implying a tactic to eliminate competition by 'scaring everyone with dubious studies'. Other critics have dismissed Anthropic’s strategy as 'safety theatre,' a branding exercise that lacks genuine commitment to implementing robust safeguards.
This internal conflict and the tension between proactive safety disclosures and accusations of strategic manipulation underline the complex ethical landscape for AI innovators. Such debates echo concerns raised in regions like South Korea, where discussions around responsible AI development weigh heavily on major tech firms like Naver and Kakao.
Even within Anthropic, there appear to be internal tensions regarding the practical application of safety principles. Mrinank Sharma, an AI safety researcher at Anthropic, recently resigned, citing that “The world is in peril.” In his resignation letter, Sharma articulated, “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.
He continued, “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society, too.” While Anthropic has not yet commented on Sharma’s departure directly, Amodei himself acknowledged on the Dwarkesh Podcast that the company sometimes grapples with balancing safety and profitability. Free ChatGPT's True Cost Revealed also touches on this economic pressure point.
“We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” Amodei admitted.
The Unavoidable Collision: Safety vs. Profitability
Amodei’s initial departure from OpenAI was driven by a conviction that the company wasn't adequately prioritising AI safety. At Anthropic, he championed the development of its large language model, Claude, with safety built-in, notably via its 'Constitutional AI' approach that imbues the model with values rather than strict rules.
The company also pledged not to release any AI capable of catastrophic harm, as outlined in its Responsible Scaling Policy. Such ethical considerations are becoming increasingly vital for companies expanding into Asian markets, where cultural nuances and regulatory landscapes demand careful navigation.
Five years on, Anthropic’s journey has been remarkable, securing a $30 billion fundraise at a $380 billion post-money valuation. Yet, this success is shadowed by a constant, high-stakes balancing act between its founding mission and commercial imperatives.
“The pressure to survive economically, while also keeping our values, is just incredible. We’re trying to keep this 10x revenue curve going.” Amodei candidly expressed the immense pressure/
Future-Proof Your Career: 4 AI Scenarios to Prepare For highlights similar challenges of adapting to rapid AI advancements. In an industry defined by rapid innovation – with major players like Anthropic, OpenAI, Google, and xAI seemingly releasing new models every few months – Amodei recognises the danger of complacency.
He previously stated that if Anthropic were to "sit on the sidelines, we’re just going to lose and stop existing as a company.” This competitive intensity is mirrored in Asia, where technology giants like Tencent and Alibaba are rapidly advancing their AI capabilities, creating a highly dynamic and challenging market environment. You can read more about recent developments in 3 Before 9: February 24, 2026.
Investors, having poured billions into AI ventures, are naturally seeking significant returns. Brian Jackson, Principal Research Director at Info-Tech Research Group, notes that while early tech giants like Google achieved profitability swiftly, AI companies such as Anthropic and OpenAI anticipate a longer road to profitability due to exceptionally high operational costs. A significant factor contributing to this delay is the exorbitant 'cost of compute', including substantial capital expenditure on data centres and GPUs, as well as ongoing cloud bills.
Jackson explains that while a Google search is almost free to run and generates advertising revenue, the cost per prompt for a large language model (LLM) is considerably higher. “As AI scales and as more usage grows, they’re not necessarily going to get to that profitability as easily or as quickly, because the cost per prompt is so high,” he concluded.
This financial reality creates significant pressure on companies like Anthropic to push for revenue growth, potentially complicating the absolute prioritisation of safety. Can an AI company truly dedicate itself to safety above all else, when the very infrastructure it relies upon is driving it towards exponential growth and profitability? Or is this an inherent conflict that demands a paradigm shift in how we approach AI development globally, and what actions do you believe regulators and companies in the Asia-Pacific region should take to balance innovation with safety? Drop your take in the comments below.






Latest Comments (1)
📱 No one. Honestly, no one." this quote is getting so much traction on LinkedIn. Valid point
Leave a Comment