Anthropic Draws a Line in the Sand, and Pays the Price

The biggest AI story of the year so far: Anthropic refused to let the Pentagon use Claude for fully autonomous weapons or mass domestic surveillance of Americans. The Trump administration responded by labelling the company a "supply-chain risk to national security," effectively banning any government contractor from working with Anthropic. OpenAI moved in within hours to fill the gap, though Sam Altman later admitted it "looked opportunistic and sloppy." Tech workers across Google, OpenAI and the wider industry are now circulating open letters demanding clearer limits on military AI use. Meanwhile, consumers voted with their downloads: Claude hit number one on the Apple App Store. For enterprise AI buyers across Asia, this week crystallised a question every procurement team will now face: what ethical limits are baked into the tools you are buying, and who decides?

