Morning Overview

Anthropic says Chinese labs ran 24K fake accounts to steal US tech

Anthropic, a U.S. artificial intelligence company, says it detected roughly 24,000 fraudulent accounts linked to Chinese AI labs that were used to extract proprietary knowledge from its models through a technique known as distillation, the Financial Times reported. The activity generated about 16 million exchanges with Anthropic’s systems, and it comes as U.S. lawmakers separately press for tighter controls on the flow of advanced chips and AI technology to China. Together, these developments sharpen a growing confrontation over who controls the most valuable AI capabilities and how easily they can be copied.

How Distillation Attacks Work as IP Theft

Distillation is not a brute-force hack. It is a methodical process in which an outside party queries a target AI model with carefully chosen prompts, collects the responses, and uses those response patterns to train a separate, cheaper model that mimics the original. Supporters of tighter controls argue the technique can allow a rival lab to reduce development time and computing costs by treating another company’s deployed product as a training signal. When Anthropic flagged the operation, it identified 24,000 fraudulent accounts that had been created specifically to carry out this kind of systematic extraction, generating 16 million exchanges in the process.
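The mechanics described above can be sketched in a few lines. This is a deliberately toy illustration, not Anthropic's system or any real lab's pipeline: the "teacher" stands in for a deployed model behind an API, and the "student" is fit purely to the teacher's logged input/output behavior, which is the essence of distillation.

```python
# Toy sketch of distillation: the querying party never sees the
# teacher's internals, only its responses to chosen prompts.
# All names and logic here are illustrative assumptions.

def teacher(prompt: str) -> str:
    # Stand-in for a deployed frontier model behind an API.
    return "positive" if "good" in prompt else "negative"

# Step 1: query the teacher with chosen prompts and log the responses.
probe_prompts = ["good movie", "bad movie", "good service", "awful food"]
dataset = [(p, teacher(p)) for p in probe_prompts]

# Step 2: fit a cheap student to the logged behavior alone.
# Here "training" is a trivial keyword rule derived from the data;
# a real attacker would fine-tune a neural network instead.
def fit_student(pairs):
    positive_words = set()
    for prompt, label in pairs:
        if label == "positive":
            positive_words.update(prompt.split())
    for prompt, label in pairs:
        if label == "negative":
            positive_words -= set(prompt.split())

    def student(prompt: str) -> str:
        return "positive" if positive_words & set(prompt.split()) else "negative"

    return student

student = fit_student(dataset)
```

The student now mimics the teacher on the probed behavior without ever accessing its weights, which is why volume matters: more probes mean a closer imitation.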

The scale of the effort matters because distillation becomes more effective with volume. Each exchange gives the attacking lab a new data point about how the target model reasons, formats answers, and handles edge cases. At that volume, the operation could amass a large corpus of outputs that may help improve a separate model in specific task domains such as coding or math reasoning. Anthropic has framed the campaign as a direct threat to its intellectual property, and the company’s disclosure puts a concrete number on a practice that the broader AI industry has long suspected but rarely quantified publicly. For companies that have poured capital into frontier systems, the prospect that a competitor can harvest those capabilities via API calls raises existential questions about how defensible any one model really is.

Congressional Alarm Over DeepSeek and Nvidia Chips

Anthropic’s allegations do not exist in isolation. The U.S. House Select Committee on the Chinese Communist Party, chaired by Rep. John Moolenaar, has separately focused on DeepSeek, a Chinese AI lab that has drawn heightened attention in recent months. In a letter sent to the Commerce Secretary, Moolenaar stated that Nvidia products are being used by both DeepSeek and the People’s Liberation Army, citing documents the committee had reviewed and urging tighter controls on sensitive chip shipments. The letter called for stronger enforcement of export rules, including restrictions on the Nvidia H200, a high-end processor designed for AI workloads and seen in Washington as a strategic technology.

The committee went further in a separate report titled “DeepSeek Unmasked,” which alleges DeepSeek is integrated into PLA systems and accuses the lab of routing user data in ways that raise surveillance concerns, manipulating outputs to align with Chinese government censorship requirements, and engaging in what the committee described as a likelihood of unlawful distillation. The report also claims DeepSeek has access to advanced chips that should, in theory, be blocked by existing export controls. Taken together, the congressional findings highlight broader concerns in Washington about distillation and chip access, though the committee material is separate from Anthropic’s account-based allegations.

Why Export Controls Keep Falling Short

The gap between policy intent and enforcement is one of the most striking threads running through both the Anthropic disclosure and the congressional investigation. Washington has steadily tightened rules on shipping advanced AI chips to China since late 2022, yet the committee’s work on DeepSeek claims the lab still obtained and used restricted processors. The H200, which Nvidia markets as a flagship AI training chip, was specifically discussed in the committee’s review of export-rule enforcement, and the findings imply that existing controls have not prevented Chinese entities from acquiring the hardware they need. Loopholes in third-country transshipment, gray-market resellers and the difficulty of tracking complex supply chains all blunt the impact of formal restrictions.

Distillation attacks represent a separate and arguably harder problem to regulate. Even if chip export controls worked perfectly, a Chinese lab with internet access and enough fake accounts could still query American AI models millions of times and extract their behavior. That is precisely what Anthropic says happened. No tariff, no chip ban, and no licensing regime currently addresses this vector in a systematic way. API providers can impose rate limits, flag suspicious usage patterns, and require identity verification, but those are private-sector defenses that vary in rigor from company to company. The financial stakes are high for leading AI developers, and investors closely watch how companies protect model advantages, including through market data and earnings disclosures.
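The private-sector defenses mentioned above can be sketched as follows. This is a minimal illustration of a sliding-window rate limiter with a soft anomaly flag for accounts whose query volume resembles systematic extraction; the thresholds and class names are assumptions for the example, not any provider's actual policy.

```python
# Hedged sketch of per-account rate limiting plus anomaly flagging.
# Thresholds are illustrative, not a real provider's configuration.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100      # hard rate limit
EXTRACTION_SUSPECT_THRESHOLD = 80  # soft flag below the hard cap

class AccountMonitor:
    def __init__(self):
        self.history = defaultdict(deque)  # account_id -> request timestamps
        self.flagged = set()               # accounts queued for human review

    def allow(self, account_id, now=None):
        """Return True if the request is allowed under the rate limit."""
        now = time.time() if now is None else now
        q = self.history[account_id]
        # Drop timestamps that have fallen outside the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_REQUESTS_PER_WINDOW:
            return False  # hard limit reached: reject the request
        q.append(now)
        if len(q) >= EXTRACTION_SUSPECT_THRESHOLD:
            self.flagged.add(account_id)  # suspicious volume: flag, but allow
        return True
```

As the passage notes, defenses like this vary in rigor from company to company: the hard cap stops naive scraping, while the soft flag is only useful if someone reviews it, and a determined actor can route around both by spreading queries across thousands of accounts.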

How Companies and Regulators May Respond

For American AI companies, the Anthropic case is a warning that model deployment itself creates a new attack surface. Every API call is a potential data leak if the caller is systematically harvesting responses. Companies will likely respond by investing more heavily in anomaly detection, tightening account verification, and potentially restricting access from certain jurisdictions. Some may move toward gated access models in which only vetted enterprise customers can interact with their most capable systems, reserving open endpoints for weaker models that reveal less about proprietary architectures. That shift could reshape the competitive landscape, favoring firms with large sales and compliance teams able to vet customers much as financial institutions perform know-your-client checks.

For policymakers, the twin pressures of chip smuggling and model distillation demand different tools. Hardware controls target supply chains and customs enforcement. Distillation defenses, by contrast, require cooperation from AI companies, international agreements on acceptable use of deployed models, and potentially new legal frameworks that treat systematic model querying as a form of trade-secret theft. The committee’s work on DeepSeek explicitly ties these issues together by alleging that the same entity benefits from both restricted chips and stolen model outputs. Whether Congress acts on that framing with legislation or leaves enforcement to existing agencies and policy mechanisms will shape the next phase of U.S. AI strategy, a debate that is increasingly intertwined with broader discussions of economic resilience and policy risk in technology sectors.

A Global Competition Over AI Capabilities

The Anthropic episode also highlights how global the AI race has become, and how quickly expertise can spread once leading systems are deployed. Chinese labs have invested heavily in local talent, often drawing on engineers and researchers trained at elite institutions that feature prominently in international education rankings. That human capital, combined with access to either smuggled chips or distilled model behavior, narrows the gap with U.S. firms that once assumed their head start in both compute and algorithms would be hard to erode. From Washington’s perspective, the risk is not only military: a world in which multiple states and well-funded companies can cheaply clone frontier models could undermine U.S. leverage in setting safety norms and technical standards.

At the same time, companies and public institutions increasingly rely on specialized data and subscription services to track developments in AI, from export-control enforcement to corporate investment. Industry executives weighing how much to reveal about their models’ inner workings must now assume that rivals, regulators, and investors are reading from the same detailed information services, and that any sign of weakness in IP protection will be quickly priced into partnerships and valuations. That feedback loop raises the pressure on AI labs to demonstrate not just technical progress but credible defenses against the sort of extraction Anthropic described.

As scrutiny intensifies, organizations exposed to AI risk are paying closer attention to reporting and legal analysis. Corporate compliance teams, trade lawyers, and policy advisers are turning to enterprise-grade research tools to understand how enforcement trends, court decisions, and geopolitical tensions might affect their AI strategies. Firms that depend heavily on machine-learning models are increasingly exploring licensing arrangements that give entire teams access to the same baseline of information, reducing the risk that critical decisions about model deployment, security investment, or cross-border partnerships are made in isolation. In that sense, the battle over distillation and export controls is not only a technical or diplomatic contest but also an information race, in which understanding the rules may prove as important as writing them.


*This article was researched with the help of AI, with human editors creating the final content.