Anthropic, a U.S. artificial intelligence company, says three Chinese AI firms used tens of thousands of fake accounts to extract proprietary capabilities from its flagship AI model, Claude. The company claims the activity violated its usage policies and could undermine U.S. AI export controls.
Allegations of Industrial-Scale Distillation
In a blog post on Monday, Anthropic said the Chinese AI companies DeepSeek, Moonshot AI, and MiniMax created more than 24,000 fake accounts to interact with Claude, generating more than 16 million exchanges with the model. Anthropic claims these interactions helped the companies train or improve their own AI systems through a process called distillation, in which a less advanced model learns from the outputs of a more advanced one.
Distillation is a common training method in AI, but Anthropic said these companies' use of it violated its terms. The company argued that the scale and method amounted to taking Claude’s capabilities without permission rather than using the model as intended. “We’ve identified industrial-scale campaigns by three AI laboratories … to illicitly extract Claude’s capabilities to improve their own models,” Anthropic wrote on its official blog.
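To illustrate the technique at the heart of the dispute: in a typical distillation setup, a "student" model is trained to match the softened output distribution of a "teacher" model rather than hard labels. The sketch below is a minimal, self-contained illustration of that loss computation; the logit values are hypothetical and no specific model or library from the article is implied.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's and student's softened
    # distributions: the student is trained to minimize this value,
    # i.e. to imitate the teacher's outputs.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical per-token logits from a teacher and a student model.
teacher = [3.0, 1.0, 0.2]
student = [2.5, 1.2, 0.4]
loss = distillation_loss(teacher, student)
```

In practice this loss would be computed over large batches of teacher outputs; campaigns like the ones Anthropic alleges would, in effect, harvest such outputs at scale via API queries.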
The allegations mark a rare and serious public dispute between leading AI firms in the U.S. and rising AI developers in China, a dynamic that reflects intensifying competition in foundational model development.
Targeted Capabilities and Security Concerns
A report by Gizmodo said the alleged campaigns targeted areas where Claude is especially strong, such as reasoning, coding, tool use, and data moderation. The accusations also state that DeepSeek alone conducted more than 150,000 interactions with Claude focused on these advanced capabilities.
Anthropic also framed its claims in a broader context of national security and export control concerns. The AI startup argued that the use of so-called illicit distillation could undermine U.S. efforts to maintain technological leadership — efforts that include export controls on advanced chips and AI technologies — and warned that models developed through such methods might lack safety and ethical guardrails enforced by frontier AI labs.
Debate Over Export Controls and Global Competition
CNN’s coverage of the accusations highlighted that the dispute comes amid ongoing debates in Washington about the effectiveness of U.S. export controls on advanced AI chips to China. Anthropic and other industry players argue that controlling access to such hardware is essential to prevent companies from leveraging raw computational power or training resources that could accelerate unauthorized model improvement.
Anthropic’s blog post did not say whether the accused companies had responded to the claims. CNN said it was still seeking comment from DeepSeek, MiniMax, and Moonshot AI.
Industry Implications
The claims follow similar warnings from OpenAI earlier this month, which said some Chinese AI developers had used distillation of outputs from U.S. AI models to sharpen their competitive edge.
While distillation remains a common technique within AI research, the debate underscores growing tensions over how much of modern AI competition is driven by legal and ethical innovation versus attempts to extract advantage from existing frontier systems. As the industry evolves, questions about intellectual property, safeguards and geopolitical rivalries are likely to grow more prominent.