Morning Overview

Anthropic accuses Chinese AI labs of cyberattacks and massive data theft

Anthropic, the San Francisco-based maker of the Claude AI model, has publicly accused three Chinese artificial intelligence laboratories of conducting so-called distillation attacks against its systems involving millions of queries. The company named DeepSeek, Moonshot, and MiniMax as the firms responsible for systematically extracting knowledge from Claude through high-volume automated queries. The allegations land at a moment of escalating friction between U.S. and Chinese technology sectors over intellectual property, export controls, and the competitive balance in AI development.

Millions of Queries Aimed at Extracting Claude’s Knowledge

Distillation, in the AI industry, refers to a process in which one model is trained to replicate the behavior of another by feeding it large volumes of the target model’s outputs. When performed without authorization, the technique effectively lets a competitor absorb years of costly research and training data by simply querying an API at scale. Anthropic’s disclosure puts hard numbers on the alleged activity: DeepSeek conducted approximately 150,000 queries, while Moonshot carried out roughly 3.4 million, according to the company’s own tracking.

The sheer gap between those two figures is striking. DeepSeek's 150,000 queries suggest a more targeted extraction effort, while Moonshot's 3.4 million indicate a sustained, industrial-scale campaign. MiniMax was also named by Anthropic as a participant, though the company's publicly confirmed metrics center on DeepSeek and Moonshot. These numbers, drawn from Anthropic's internal monitoring, have not been independently verified by a third party or government agency, a gap that leaves room for the accused labs to contest the claims.

Distillation as an Industry Shortcut

Distillation is not inherently illicit. Researchers routinely use smaller “student” models trained on the outputs of larger “teacher” models to compress AI capabilities into lighter, cheaper-to-run systems. The technique is well-documented in machine learning literature and has legitimate applications across the industry. What Anthropic alleges is different in kind: unauthorized, high-volume extraction designed to replicate proprietary capabilities without licensing, collaboration, or compensation. Reporting in the U.S. press has drawn the same line, distinguishing standard research practice from what, on the company's own account, amounts to large-scale model cloning.
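To make the legitimate technique concrete: the standard soft-target distillation loss can be sketched in a few lines. This is an illustrative sketch of the textbook formulation from the machine learning literature, not a description of any named lab's pipeline; the temperature `T` and the `T**2` scaling are conventional choices.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution,
    exposing more of the teacher's 'dark knowledge' about wrong answers."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between the teacher's and student's softened output
    distributions, scaled by T**2 so gradient magnitudes stay comparable
    across temperatures. Minimizing this trains the student to mimic
    the teacher's behavior."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student's current predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

A student model is trained by minimizing this loss over many (input, teacher-output) pairs. At an API boundary, the teacher's outputs are simply the responses a querying account collects, which is what makes the same mathematics usable for the unauthorized extraction Anthropic describes.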

The distinction matters because it exposes a structural vulnerability in how frontier AI models are deployed. Companies like Anthropic offer access to Claude through cloud APIs, which means any entity with an account can send queries and collect responses. At low volumes, this is normal usage. At the scale Anthropic describes, it becomes a pipeline for extracting the functional equivalent of proprietary training. No physical breach or traditional hack is required. The attacker simply asks enough questions, in the right way, to reconstruct the target model’s reasoning patterns. This makes enforcement difficult and raises questions about whether current terms-of-service agreements and rate-limiting tools are sufficient to protect billions of dollars in research investment.
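Rate limiting, one of the defenses in question, is commonly implemented with a token bucket: each request spends a token, and tokens refill at a fixed rate, so sustained high-volume querying gets throttled while bursty normal use passes. The sketch below is a generic illustration under that assumption, not Anthropic's actual mechanism, and the rate and capacity values are hypothetical.

```python
import time

class TokenBucket:
    """Minimal per-account token-bucket rate limiter (illustrative sketch)."""

    def __init__(self, refill_per_sec: float, capacity: int):
        self.refill_per_sec = refill_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)      # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit one request if a token is available, else refuse it."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The limitation the article points to is visible in the design: a patient attacker who stays under the refill rate is indistinguishable, at this layer, from a heavy legitimate customer.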

Export Controls and the U.S.-China AI Rivalry

Anthropic’s accusations carry weight beyond a commercial dispute between private companies. The United States has spent the past several years tightening export controls on advanced semiconductors and AI-related technology to slow China’s progress in frontier AI. If Chinese labs can bypass those restrictions by extracting knowledge directly from American models through API access, the entire export-control framework faces a serious loophole. Anthropic itself has taken a public position on the so-called diffusion rule, a set of proposed regulations governing how AI capabilities spread internationally, which has been cited in broader debates over how to preserve a technological edge while still benefiting from global research and education networks.

The company’s decision to name DeepSeek, Moonshot, and MiniMax publicly, rather than handle the matter quietly through legal channels, suggests a strategic calculation. By making the allegations part of the public record, Anthropic strengthens the case for stricter rules around API access, cross-border data flows, and model-output protections. It also puts pressure on policymakers to treat distillation attacks as a national security concern rather than a contractual violation. None of the three accused Chinese labs has issued a detailed public response to the specific claims, and no U.S. government investigation into the alleged attacks has been publicly confirmed.

A Shadow Economy of Model Cloning

The pattern Anthropic describes points toward a broader risk that extends well beyond one company’s products. If distillation attacks at this scale prove effective, they could give rise to a shadow economy in which state-backed or well-funded labs in restricted markets replicate the capabilities of Western AI systems without bearing the cost of original research. Training a frontier model from scratch requires enormous compute resources, vast datasets, and teams of specialized researchers. Distillation, by contrast, requires only API access and enough queries to capture the target model’s behavior across a wide range of tasks. That asymmetry is particularly significant for jurisdictions facing export controls on cutting-edge chips, where access to foreign models may appear to offer a faster route to competitive systems.

This dynamic creates a lopsided competitive environment. U.S. firms invest billions in research and development, only to see their outputs potentially absorbed by rivals operating under different legal frameworks. The economic incentive to distill rather than build from scratch is obvious, and absent stronger technical or legal barriers, the practice could accelerate. For Anthropic and its peers, the challenge is twofold: hardening their systems against extraction while persuading regulators that existing intellectual property protections do not adequately cover AI model outputs. Traditional copyright and trade-secret law was not designed for a world in which a product’s core value can be siphoned through a series of cleverly structured questions.

What Comes Next for AI Security

Anthropic’s public disclosure shifts the burden of proof in an important way. The company has put specific numbers and specific names on the record, creating a reference point for future policy debates and potential enforcement actions. But the allegations also expose how much of AI security still depends on self-reporting by the companies themselves. Without independent audits, government investigations, or technical standards for detecting distillation, the industry relies on individual firms to monitor their own systems and sound the alarm. That model has clear limitations, especially when the accused parties operate in different legal jurisdictions and may not be subject to the same discovery or enforcement mechanisms.

In the near term, Anthropic and its competitors are likely to respond by tightening rate limits, improving anomaly detection, and experimenting with watermarking or other methods that make it harder to reuse model outputs for training. Policymakers, meanwhile, face the task of updating export controls, data-protection rules, and intellectual property law to account for a technique that turns open-ended queries into a conduit for strategic technology transfer. Whether the alleged attacks by DeepSeek, Moonshot, and MiniMax ultimately lead to formal sanctions, lawsuits, or new regulations, they have already crystallized a central dilemma of the AI era: powerful models are most valuable when widely accessible, yet that very accessibility can make their most advanced capabilities effectively impossible to contain.
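Anomaly detection of the kind mentioned above can start very simply: flag accounts whose query volume is a statistical outlier against the rest of the customer base. The sketch below is a hypothetical illustration; the z-score rule, the threshold, and the account names are assumptions, not a description of any provider's actual monitoring.

```python
import statistics

def flag_outlier_accounts(daily_queries, z_threshold=3.0):
    """Return accounts whose daily query count sits more than
    `z_threshold` population standard deviations above the mean.
    Real monitoring would also weigh query content, timing, and
    cross-account coordination, not just raw volume."""
    counts = list(daily_queries.values())
    mean = statistics.mean(counts)
    sd = statistics.pstdev(counts) or 1.0  # guard against zero variance
    return {account for account, n in daily_queries.items()
            if (n - mean) / sd > z_threshold}
```

Volume-based flagging is only a first pass: an extraction campaign split across many accounts, each individually unremarkable, would slip under a per-account threshold, which is why the article's point about watermarking and output-level protections matters.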


*This article was researched with the help of AI, with human editors creating the final content.