OpenAI, Anthropic and Google Unite to Fight AI Model Copying in China

OpenAI, Anthropic and Google have begun sharing threat intelligence through the Frontier Model Forum to detect and block adversarial distillation attempts by Chinese AI firms. The rare collaboration targets DeepSeek, Moonshot AI and MiniMax, which the US labs accuse of systematically querying frontier models to extract their capabilities and replicate them at lower cost. Anthropic alone documented 16 million unauthorised exchanges from the three named firms. The sharing mechanism mirrors how cybersecurity companies swap attack signatures: when one lab spots a suspicious query pattern, it flags it for the others. US officials estimate adversarial distillation costs American AI labs billions of dollars annually.

Why it matters for Asia
This is the first coordinated defensive operation between all three frontier labs, and it lands squarely on Asia's doorstep. For enterprise buyers across Southeast Asia who rely on APIs from these providers, the crackdown could mean tighter access controls and closer usage monitoring, while Chinese-built alternatives that benefited from distillation may face capability gaps. Policymakers in Singapore, Japan and South Korea, all of whom are drafting AI governance frameworks, now have a live case study in cross-border IP enforcement to factor into their rules.
