Morning Overview

Trump push to blacklist Anthropic clashes with Pentagon interest in Claude

When the Trump administration moved to cut the Pentagon off from one of the most capable AI systems built by an American company, it reached for a legal tool originally forged to keep Chinese surveillance chips out of military hardware. A federal judge said that was the wrong tool for the job. Now an appeals court will decide who is right, and the outcome could reshape how the U.S. military buys artificial intelligence.

The dispute centers on Anthropic’s Claude AI model, which multiple Defense Department units have integrated into workflows ranging from intelligence analysis to logistics planning. In early 2026, the administration issued a directive through the Pentagon to halt agency use of Claude, invoking 10 U.S. Code Section 3252, a supply chain risk statute that empowers the Defense Department to exclude vendors whose products could compromise military systems. The law was written with foreign adversaries in mind, and it has historically been applied to companies like Huawei and Kaspersky Lab.

Anthropic is headquartered in San Francisco, which makes it an unusual target for a statute historically aimed at foreign adversaries.

The ruling and the appeal

U.S. District Judge Rita Lin blocked enforcement of the directive in late March, finding that the administration had stretched a narrowly scoped procurement authority well beyond its intended purpose. In her ruling, Lin wrote that Section 3252 establishes procedures the Pentagon must follow before excluding a vendor on supply chain grounds, and that those procedures contemplate threats from compromised foreign components, not policy disagreements with domestic technology firms.

The administration filed its appeal to the Ninth Circuit in early April. The central legal question is whether the statute’s definitions of “covered systems” and “covered items of supply” can be read broadly enough to encompass a domestically developed AI model. A companion statute, 41 U.S. Code Section 4713, provides broader federal procurement authorities for mitigating supply chain risks, but the administration chose the narrower Pentagon-specific provision, a choice Judge Lin found telling.

No timeline for oral arguments before the Ninth Circuit has been publicly announced as of late April 2026.

Anthropic’s technical defense

In court filings and hearings, Anthropic's lawyers made a pointed technical argument: once Claude is deployed on the Pentagon's air-gapped classified networks, the company has no ability to remotely access, manipulate, or disable the model. If true, that claim guts the supply chain risk theory at its foundation. A supply chain threat typically involves a vendor that retains some form of hidden access or control over deployed technology. Anthropic says that pathway simply does not exist in its architecture.

These are on-the-record legal representations, and attorneys face professional sanctions for misrepresenting facts to a federal court. But they remain assertions by an interested party. The Pentagon has not publicly released a technical counter-assessment, and no independent review of Claude’s deployment architecture on classified systems has surfaced in the court record.

Anthropic has also pointed to an underlying contract dispute over how Claude can be used within the Defense Department, suggesting the blacklist effort may be rooted in commercial disagreements rather than genuine security findings. The full text of the contract has not been made public.

What the administration has not explained

The evidentiary gap on the government’s side is significant. The primary DoD assessment documents that would detail the specific supply chain risk evaluation for Anthropic under Section 3252 remain classified or otherwise withheld. Court proceedings reference these determinations, but the underlying analysis, including what evidence the Pentagon relied on to classify a domestic AI firm as a supply chain threat, is unavailable for independent review.

No Trump administration official has gone on the record to explain the basis for the directive. The available record consists of judicial summaries and Anthropic’s own filings. That silence leaves open a question that has circulated widely in Washington’s defense technology circles: whether the push to blacklist Anthropic reflects a genuine national security judgment or something more political.

Anthropic’s CEO, Dario Amodei, has been one of the most vocal executives in Silicon Valley on the need for AI safety regulation, a position that has put the company at odds with the administration’s deregulatory stance. Several defense technology analysts have noted that Anthropic’s public advocacy for safety guardrails may have made it a target, though no direct evidence linking the company’s policy positions to the blacklist decision has emerged in court.

The competitive backdrop

The case does not exist in a vacuum. The AI industry’s biggest players are competing aggressively for Pentagon contracts worth billions of dollars over the coming decade. Anthropic, backed by a multibillion-dollar investment from Amazon, has positioned Claude as a leading model for enterprise and government use. Rivals including OpenAI, Google DeepMind, and Elon Musk’s xAI are pursuing the same market. Blacklisting one competitor on supply chain grounds, using a statute built for foreign threats, raises obvious questions about whether procurement decisions are being shaped by technical merit or political relationships.

Defense contractors and AI companies watching this case are asking a practical question: if a company’s technology can be reclassified as a supply chain risk based on a policy disagreement rather than a documented technical vulnerability, what does that mean for every other American firm considering defense work? The risk calculus for entering the military AI market shifts dramatically if political exposure matters as much as technical performance.

What the Ninth Circuit will signal

The appeals court’s ruling will do more than resolve a contract dispute. It will establish whether procurement authorities built to counter foreign adversaries can be repurposed to exclude domestic technology companies without the kind of documented, adversary-focused threat assessment those statutes were designed to require. A ruling in the administration’s favor would give the executive branch broad discretion to sideline American AI firms from defense work. A ruling for Anthropic would reinforce the statutory limits on that power and likely force the Pentagon to articulate a far more specific justification before blacklisting any domestic vendor.

Either way, the case has already changed the conversation. For years, the biggest barrier to selling AI to the military was clearing the Pentagon’s technical and security requirements. Now companies must also weigh whether building the best model is enough, or whether staying in the government’s good graces matters just as much.


*This article was researched with the help of AI, with human editors creating the final content.