The Pentagon slapped Anthropic, the San Francisco-based maker of the Claude AI assistant, with a “supply chain risk” designation in May 2026, a label previously reserved for companies the U.S. government considers national security threats, such as Huawei and Kaspersky Lab. The move immediately barred federal defense contractors from using Claude in sensitive military work, cutting Anthropic off from one of the fastest-growing markets in artificial intelligence.
Anthropic fired back within days, filing a lawsuit in federal court in Washington, D.C., arguing the Defense Department had bypassed the legal process Congress wrote into law for exactly these situations. A federal judge found the procedural objection credible enough to issue a temporary injunction blocking the Pentagon from enforcing the designation while the case proceeds.
The result is a legal standoff with no modern precedent: a major American AI company fighting the same national security machinery typically aimed at foreign adversaries, over a label that could reshape how the entire tech industry interacts with the Defense Department.
What the Pentagon did and what the law requires
The designation invokes Section 3252 of Title 10, a statute that gives the Defense Department authority to exclude companies from its supply chain when it identifies risks of sabotage, counterfeiting, or subversion. The law carries real force: once a company is tagged, contractors across the defense industrial base are directed to stop using that company’s products in covered procurement.
But Section 3252 also imposes procedural requirements. The government must produce written findings and provide notifications before or alongside such a designation. Anthropic’s lawsuit argues the Pentagon skipped those steps entirely, moving straight to enforcement without the documentation Congress demanded as a safeguard against arbitrary action.
The complaint also cites Section 4713 of Title 41, a parallel statute governing supply chain risk actions by executive agencies, which contains its own notice-and-process provisions. Both laws channel judicial review through the D.C. Circuit Court of Appeals, giving Anthropic a clear legal path but also placing the case in a court accustomed to deferring to the executive branch on national security matters.
The regulatory language backing up the designation, found in the Defense Federal Acquisition Regulation Supplement at 48 CFR Section 252.239-7018, ties “supply chain risk” directly to concepts like sabotage and subversion. That vocabulary was built for counterintelligence scenarios, not disputes with domestic AI companies. The same regulation also allows the government to limit how much of its reasoning it discloses, raising the possibility that Anthropic may never see the full justification for the label it is fighting.
Why Anthropic and why now
The Pentagon has not publicly explained what prompted the designation. No official statement has identified a specific vulnerability in Claude, a problematic investor relationship, or a corporate governance concern that would distinguish Anthropic from rivals like OpenAI, Google, or Palantir, all of which hold or are pursuing defense contracts.
That silence has fueled speculation about whether the designation is connected to broader political tensions. Anthropic, led by CEO Dario Amodei, has been one of the most vocal AI companies on the subject of safety regulation, a stance that has at times put it at odds with the Trump administration’s preference for lighter-touch oversight of the technology sector. The company has also publicly supported international AI safety agreements that some administration officials have criticized as unnecessarily restrictive to American competitiveness.
None of that proves the designation was retaliatory, and it would be irresponsible to assert that without evidence. But the gap between the severity of the label and the absence of a public explanation is impossible to ignore. Supply chain risk designations against companies like China’s Huawei came with extensive public documentation, congressional hearings, and years of intelligence community warnings. Anthropic received no comparable public case before the Pentagon acted.
What the court said
A federal judge in the District of Columbia granted Anthropic’s request for a temporary injunction, halting enforcement of the designation while the lawsuit moves through the courts. The order stopped the immediate damage to Anthropic’s government business but did not address whether the Pentagon’s underlying concerns have merit.
The ruling focused narrowly on procedure: whether the government appeared to have followed the steps Congress required before wielding this authority. That framing favors Anthropic at this stage, because the statutes are specific about what the government must do, and the company’s complaint alleges those requirements were not met.
But the case is far from over. The government retains the option to appeal the injunction or to go back and produce the findings and notifications the law demands, potentially re-issuing the designation on firmer procedural ground. The D.C. Circuit, which will ultimately hear the case, has a long history of giving the executive branch wide latitude on national security decisions, even when the process has been messy.
What this means for defense contractors
For now, the injunction means the designation is not being enforced. Defense contractors currently using Claude in their workflows are not required to stop. But that could change quickly if the government prevails on appeal or if the injunction is modified.
Contractors with Anthropic integrations should be mapping exactly where Claude is embedded, from code generation and testing to analytic support and document drafting, so that any required transition can happen without scrambling. For some organizations, swapping one large language model for another is relatively simple. For those that have built custom integrations or fine-tuned systems around Claude, the switch could be expensive and slow.
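For teams that need a concrete starting point, a first-pass inventory does not require specialized tooling. The sketch below, in Python, walks a source tree and flags files that reference the Anthropic SDK, the public API endpoint, or Claude model identifiers. It is a minimal sketch under stated assumptions: the search patterns and file extensions are illustrative, not an exhaustive catalog of every way a model can be embedded in a contractor's systems.

```python
import os
import re
import sys

# Illustrative markers of Claude usage: the Anthropic Python SDK,
# direct calls to the public API endpoint, and "claude-*" model IDs.
# These patterns are assumptions for a first pass, not a full audit.
PATTERNS = {
    "sdk_import": re.compile(r"^\s*(import anthropic|from anthropic\b)", re.MULTILINE),
    "api_endpoint": re.compile(r"api\.anthropic\.com"),
    "model_id": re.compile(r"claude-[a-z0-9.\-]+"),
}

# File types worth scanning on a first pass (source code plus config).
SCAN_EXTENSIONS = {".py", ".js", ".ts", ".java", ".go",
                   ".yaml", ".yml", ".json", ".toml", ".cfg"}


def scan_repo(root: str) -> list[tuple[str, str]]:
    """Walk a source tree and report (path, pattern_name) hits."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] not in SCAN_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file; skip rather than fail the scan
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    hits.append((path, label))
    return hits


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, label in scan_repo(root):
        print(f"{label:12s} {path}")
```

A real audit would have to reach further, into configuration management, cloud service bindings, and any fine-tuned deployments, but even a crude scan like this gives procurement and legal teams a defensible first map of their exposure.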
Legal counsel familiar with defense procurement should be reviewing whether specific contracts fall within the scope of the designation and the court’s order. The statutes do not map neatly onto the rapidly evolving landscape of generative AI, and there is genuine ambiguity about which uses of an AI model count as part of a “covered system” under the DFARS clause.
What this signals for the AI industry
The Anthropic case demonstrates that national security supply chain authorities, tools originally designed to keep compromised hardware and telecom equipment out of military networks, can reach into the software layer and target AI providers. If the Pentagon’s designation survives legal challenge, it would establish a precedent that any AI company could be excluded from defense work through a process that allows the government to keep its reasoning largely secret.
That prospect matters well beyond Anthropic. Every major AI company training large language models relies on global data sets, complex cloud infrastructure, and upstream dependencies that could, in theory, be characterized as supply chain vulnerabilities. A ruling that validates the Pentagon’s approach without requiring more transparency could chill the willingness of AI firms to engage with defense customers at all, or push them to preemptively restructure in ways that satisfy national security reviewers but limit innovation.
Conversely, if the courts conclude the Defense Department cut corners, the outcome would not strip the government of its authority. It would simply force agencies to build documented, defensible cases before invoking it. For an industry watching nervously from the sidelines, that distinction matters: the question is not whether the government can act, but whether it must show its work.
As of late May 2026, the public record supports only a narrow set of firm conclusions. The Pentagon used a powerful and rarely deployed procurement tool against a domestic AI company in a way that has no clear precedent. Anthropic is challenging the move on statutory grounds, and a federal judge found enough merit in those arguments to pause enforcement. Everything else, including the security rationale and the long-term consequences for AI in national defense, remains locked behind the same secrecy provisions that made this fight possible in the first place.
*This article was researched with the help of AI, with human editors creating the final content.