Anthropic has accused three Chinese AI rivals of orchestrating a coordinated effort to loot Claude data, allegedly using more than 24,000 fake accounts to siphon millions of chat exchanges. The company says the campaigns were designed to copy Claude’s capabilities through so‑called distillation attacks, effectively turning its safety‑tuned model into a covert training set for competitors. The dispute lands as governments debate how to police cross‑border AI development and protect advanced systems from industrial espionage.
DeepSeek
DeepSeek is identified by Anthropic as one of three Chinese AI firms that allegedly ran industrial-scale operations to extract Claude data. According to Anthropic, the companies used fake accounts and proxy networks to generate more than 16 million exchanges with Claude, a pattern investigators describe as a textbook distillation attack. By hammering the model with structured prompts, DeepSeek could collect high-quality outputs that mirror Anthropic’s reasoning and safety work without licensing access or sharing research.
Reporting on these distillation attacks notes that Anthropic already restricts commercial access to China for national security reasons, which likely pushed operators toward covert scraping rather than official APIs. Critics quoted in separate coverage argue that Anthropic has itself relied on large-scale internet data, yet the company insists there is a clear line between public web text and targeted theft of proprietary model behavior. For frontier labs and regulators, DeepSeek’s alleged role illustrates how quickly AI competition can blur into accusations of state-linked technology transfer.
Moonshot AI
Moonshot AI, known for the Kimi model upgrade that intensified competition inside China, is another firm Anthropic accuses of siphoning Claude data. Investigators say DeepSeek, Moonshot AI and MiniMax together generated more than 16 million interactions with Claude, supported by operators who allegedly masked their locations. Anthropic claims these exchanges were not normal usage but carefully scripted queries meant to reproduce Claude’s step-by-step reasoning, safety refusals and reinforcement-learning behavior.
The scale of the operation is highlighted in separate reporting that says DeepSeek, Moonshot AI and MiniMax used 24,000 fake accounts to automate scraping. As Moonshot AI pushes Kimi as a rival to Western frontier models, Anthropic’s allegations raise questions about how much of that progress is homegrown and how much is borrowed from Claude. For policymakers weighing export controls on advanced chips, the case is being cited as evidence that model weights are not the only sensitive asset, since behavior can be cloned at scale through persistent querying.
MiniMax
MiniMax is the third Chinese company Anthropic accuses of illicitly harvesting Claude data. According to one detailed account, when Anthropic released a new Claude model during MiniMax’s operation, the company reportedly redirected nearly half its scraping traffic to the upgraded system. Anthropic interprets this pivot as proof that MiniMax was systematically chasing frontier capabilities, not simply running product comparisons or benign benchmarking.
Anthropic’s public complaint describes three Chinese companies that tried to “illicitly extract” its technology at industrial scale, despite regional access blocks and service restrictions. Coverage citing Anthropic’s critics notes that the company already blocks commercial access from China on national security grounds. For developers, the MiniMax allegations highlight a growing risk: any public-facing AI service can be quietly turned into a training pipeline for competitors, long before regulators agree on enforcement tools.
This article was researched with the help of AI, with human editors creating the final content.