Morning Overview

Judge blocks Pentagon from labeling Anthropic a supply chain risk

A federal judge in California has blocked the Pentagon from enforcing a supply chain risk designation against artificial intelligence company Anthropic, halting a rarely used government authority that could have barred the firm from defense contracts. The preliminary injunction, issued on March 26, 2026, marks an early legal win for Anthropic and raises pointed questions about how the Trump administration has wielded procurement exclusion powers against an American AI developer.

What the Court Ordered

U.S. District Judge Rita F. Lin granted Anthropic’s motion for a preliminary injunction in the case Anthropic PBC v. U.S. Department of War et al., No. 3:26-cv-01996-RFL, filed in the Northern District of California. The order prevents the Pentagon from enforcing or maintaining the supply chain risk label against Anthropic while the case proceeds. That label, once applied, effectively walls off a company from competing for defense work, making the injunction a significant shield for Anthropic’s government business prospects.

Two days before issuing the injunction, Judge Lin openly questioned the Pentagon’s motives during a hearing. She said she was focusing on whether the Trump administration acted improperly by affixing what she called a “scarlet letter” to Anthropic, according to an Associated Press account. That language signals the court found enough procedural concern to justify freezing the designation before a full trial.

The Statute Behind the Label

The Pentagon’s authority to exclude vendors rests on 10 U.S.C. Section 3252, a statute titled “Exclusion of sources to reduce supply chain risk.” The law allows the Department of Defense to bar companies from procurement when their products or services are deemed to pose risks to the defense supply chain. It was designed primarily with foreign adversaries and compromised hardware in mind, not domestically headquartered AI firms.

Congress created and later refined this authority through successive defense authorization acts. The operative provisions give the Pentagon broad discretion to act quickly when it perceives a threat, but they also impose procedural safeguards, including written findings and coordination with other agencies, before an exclusion takes effect.

Applying this statute to Anthropic represents a notable expansion of its use. The Associated Press described the authority as rarely invoked, and Judge Lin’s concerns about arbitrariness suggest the government may have stretched the statute beyond its intended scope. The law sets procedural requirements for exclusion decisions, and the court’s willingness to intervene implies those requirements may not have been met. Without access to the full classified or internal record the Pentagon relied on, the exact evidentiary basis for the designation remains unclear from public filings.

That gap matters. If the government cannot demonstrate a reasoned, evidence-backed rationale for branding Anthropic a supply chain threat, the designation looks less like a security measure and more like a punitive action. The judge’s “scarlet letter” framing reinforces that reading and hints that the court may ultimately scrutinize whether the Pentagon followed the administrative law standards that normally govern agency decision-making.

Microsoft and Military Leaders Join the Fight

Anthropic did not fight alone. Microsoft and retired military chiefs filed amicus briefs supporting the AI company’s challenge, according to Associated Press reporting on the case. Their involvement carries weight because both represent constituencies the Pentagon depends on: a major defense technology contractor and former senior uniformed leaders who shaped military procurement strategy.

The amicus filings reportedly warned that the designation could damage the broader defense technology ecosystem. That argument has teeth. If the government can label a leading AI company a supply chain risk without transparent justification, other technology firms weighing whether to pursue defense contracts may decide the regulatory risk is not worth the effort. The defense sector already struggles to attract top-tier commercial technology companies, and arbitrary use of exclusion powers could widen that gap by signaling that access to defense work can be cut off abruptly and without clear recourse.

Microsoft’s decision to back Anthropic is particularly telling. The two companies are competitors in the AI space, yet Microsoft apparently concluded that the precedent set by the Pentagon’s action posed a greater threat to its own interests than any competitive advantage gained from Anthropic’s exclusion. That calculus suggests major industry players see the case as a bellwether for how aggressively the government might police AI vendors under supply chain authorities that were originally crafted for different kinds of risks.

Why the “Department of War” Label Matters

One detail in the court filings deserves attention: the defendant is listed as the “U.S. Department of War,” not the Department of Defense. Some coverage describes the case as a dispute with the Department of Defense, and the caption likely reflects the administration’s recent rebranding of the department, under which Pete Hegseth carries the title Secretary of War. Regardless of nomenclature, the practical effect is the same: the court has restrained the military establishment from acting on the Anthropic designation.

The unusual caption also underscores how much of this dispute turns on institutional power rather than a single contract. The Department of War, as framed in the complaint and docket, sits at the center of a web of authorities spanning procurement rules, national security directives, and codified statutes. Those authorities, spread across titles of the U.S. Code, give the department enormous leverage over which companies can meaningfully compete in defense markets.

What This Injunction Changes Right Now

For Anthropic, the injunction removes an immediate commercial threat. A supply chain risk label does not just block a single contract; it can trigger a cascade of exclusions across defense procurement channels and discourage potential partners and customers in adjacent government markets. With the label frozen, Anthropic can continue pursuing federal work without the stigma of an active designation hanging over its proposals, preserving both revenue opportunities and its reputation as a viable national security partner.

For the Pentagon, the ruling forces a reckoning with how it applies Section 3252. The statute grants broad authority, but that authority is not unlimited. Courts generally defer to national security judgments, so the fact that Judge Lin found enough reason to issue a preliminary injunction, and signaled concerns about arbitrary and capricious government action, suggests the government’s case may rest on thin procedural ground. If the Pentagon wants to use supply chain exclusion tools against AI companies, it will likely need to build a far more detailed record than it apparently assembled here, documenting concrete risks and demonstrating that less drastic measures would be inadequate.

The injunction also buys time for Congress and oversight bodies to assess whether the statute is being used as intended. Lawmakers tracking defense acquisition and technology policy can consult the same statutory text at issue before Judge Lin, and, if they conclude the authority has been stretched too far, they could narrow or clarify it through future legislation. Any such changes would move through the regular legislative process, where debates over AI, national security, and industrial policy are already intensifying.

What Comes Next

The case now proceeds to fuller briefing on the merits, where the government will have to defend both its interpretation of Section 3252 and the specific steps it took to brand Anthropic a supply chain risk. That phase may involve classified submissions and in camera review, limiting what the public can see. But the preliminary injunction has already shifted the balance: the court has signaled that deference to national security claims is not automatic when a domestic technology company alleges that opaque processes are being used to cut it off from a critical market.

Beyond Anthropic, the dispute highlights a broader tension in U.S. defense policy. The government wants rapid access to cutting-edge AI while also guarding against vulnerabilities, foreign influence, and misuse. Tools like supply chain exclusion authorities are one way to manage that risk, but when applied without clear standards or transparency, they can deter exactly the companies the Pentagon most wants to attract. How this case is resolved will help determine whether those tools are seen as narrowly tailored safeguards or as blunt instruments that chill innovation.

For now, Judge Lin’s order ensures that Anthropic will not carry the “scarlet letter” of a supply chain risk designation while it argues its case. The ultimate question is whether the court will conclude that the Department of War followed the law when it invoked a powerful, rarely used authority against a domestic AI firm, or whether this episode becomes a catalyst for rethinking how national security concerns are balanced against the need for a robust, competitive defense technology base.

*This article was researched with the help of AI, with human editors creating the final content.*