Morning Overview

Pentagon seeks to replace Anthropic AI models after supply-chain rift

The Pentagon is moving to replace Anthropic’s Claude artificial intelligence models across its operations after labeling the company a supply chain risk, according to officials and court records. The decision follows a breakdown in talks over how the military can use Anthropic’s technology and arrives even as the government has been expanding access to the same tools across federal agencies. The clash now threatens to reshape how emerging AI firms work with national security customers.

The decision to move away from Anthropic

The Pentagon is developing alternatives to Anthropic PBC’s AI tools after deciding to move away from Claude models, according to an official account of the shift. That planning is framed as a direct response to an intensifying feud with the company over military use of its systems. The move signals that defense leaders are prepared to retool sensitive software pipelines rather than accept the firm’s conditions on how its models are deployed.

The Pentagon has stated that Anthropic and its products are deemed a supply chain risk “effective immediately,” according to a public statement cited by Pentagon officials. That formal designation gives the department a powerful tool to limit where and how the company’s technology can appear inside defense contracts, even when the government is not buying directly from Anthropic.

What the supply chain label actually does

The “supply chain risk” label is described as effective immediately and bars government contractors from using Anthropic technology in United States military work, according to a detailed account of the designation from defense procurement officials. That means a prime contractor building software for the Pentagon cannot quietly integrate Claude models as a component, even through a subcontractor, without running afoul of the order.

On the timing, there is a discrepancy between sources. One account states that the Pentagon officially notified Anthropic that it is a “Supply Chain Risk” on March 5, 2026, according to a description of the notice reported by technology-focused coverage. Another account states that the Pentagon designated Anthropic a supply chain risk on March 6, 2026, according to the procurement-focused description cited above. The available reporting does not resolve whether the internal notification and the public designation occurred on different days.

A feud rooted in how AI can be used in war

Behind the designation is a dispute over how much control Anthropic should retain once its AI is in military environments. Defense Secretary Pete Hegseth has warned Anthropic to let the military use the company’s AI technology as it sees fit, according to people briefed on those discussions. That warning included an ultimatum-style deadline for the company to relax restrictions and a threat to use tools such as the supply chain designation and the Defense Production Act if it did not.

Those accounts describe a meeting identified as a key turning point, where officials signaled they were prepared to treat the company not just as a vendor but as a potential obstacle to military planning. The Pentagon’s later use of the supply chain mechanism suggests leaders followed through on that threat once it became clear Anthropic would not grant the level of freedom they sought for battlefield or intelligence applications.

Anthropic’s legal counterattack

Anthropic has said it will sue the Defense Department over the supply chain designation, which the company has described as not legally sound, according to an account of its position reported by people familiar with the company’s plans. That same account notes that the designation could prevent the start-up from doing business with the government at all, a far broader impact than the Pentagon’s own contracts.

The case, titled Anthropic PBC v. U.S. Department of War et al., was filed in the United States District Court for the Northern District of California, according to the official docket summary. Court records show that the case includes a complaint and motions for temporary restraining orders and preliminary injunctions, which indicates that Anthropic is seeking swift judicial review of the Pentagon’s move while broader arguments play out.

Support from Microsoft and retired military leaders

Anthropic is not alone in challenging the Pentagon’s use of supply chain powers. Microsoft backs Anthropic in its court fight against the Pentagon, according to a description of third-party filings reported by people familiar with those briefs. Retired military chiefs have also filed court briefs supporting Anthropic, according to the same account, which describes them as arguing that the designation harms national security innovation rather than protecting it.

Those filings question the appropriateness and effects of using a supply chain risk mechanism in what they frame as a contract dispute, according to that description. Their position suggests concern that if the Pentagon deploys security tools as leverage in negotiations over usage rights, other technology suppliers may hesitate to work with defense agencies unless they surrender significant control up front.

A $1 governmentwide deal now in question

The Pentagon’s move collides with a separate effort to make Anthropic’s AI widely available across the federal government. The General Services Administration struck a OneGov deal with Anthropic that offers Claude AI to all branches of government for just $1, according to an official GSA announcement. That agreement is described as a governmentwide procurement vehicle for Anthropic and outlines “Claude for Government” availability for agencies that want to experiment with or deploy the tools.

The OneGov structure means that agencies outside the Defense Department, from civilian regulators to small commissions, can tap into the same contract vehicle. Procurement resources such as sam.gov and acquisition.gov, along with small business guidance from SBA contracting guides, explain how such governmentwide vehicles allow agencies to bypass lengthy individual competitions.

The Pentagon’s supply chain designation does not automatically rewrite that GSA deal, but it raises practical questions for any defense-related program that might have planned to use Claude through the OneGov vehicle. It also creates a split environment where civilian agencies retain a simple path to Anthropic tools while military programs are effectively walled off.

How the Pentagon plans to replace Claude

Officials say the Pentagon is developing alternatives to Anthropic’s tools, according to the account describing the department’s move to replace the company amid the AI feud. That suggests an internal review of other large language models and AI providers, potentially including systems already vetted for security and export controls. The mechanics are not public, but the scale of defense software integration means any replacement will require significant testing and integration work.

For contractors, the bar on using Anthropic technology in U.S. military work, as described by procurement officials, forces a review of existing code bases and supplier lists. A firm that had quietly plugged Claude into a logistics dashboard or an intelligence analysis tool must now either strip out that integration or risk losing eligibility for future awards. That kind of retrofit can be costly and time consuming, especially when tied to classified systems.

Why this fight matters beyond one company

The clash between the Pentagon and Anthropic is already shaping how AI companies think about defense work. The warning from Hegseth, described by people briefed on the ultimatum, suggests that the department expects suppliers to allow broad military discretion over how tools are used, including in combat support roles. Anthropic’s pushback, and its decision to challenge the supply chain label in court, signals that some AI developers want contractual and ethical guardrails even when selling to national security customers.

Court filings from Microsoft and retired military leaders, described by people familiar with those briefs, frame the supply chain designation as a test of whether powerful procurement tools can be used as pressure in commercial disputes. Their argument suggests that if the Pentagon can swiftly cut off a supplier’s access to government work across the board, companies may demand higher prices or avoid federal deals entirely to offset that risk.

At the same time, the OneGov agreement highlighted by GSA shows that other parts of the government see value in low-cost, wide access to Claude for Government. That contrast between enthusiastic civilian adoption and hardening military skepticism captures the broader tension: the United States wants to harness advanced AI for public service and defense, but has not yet settled on shared rules for how much control private developers keep once their systems enter the national security arena.

*This article was researched with the help of AI, with human editors creating the final content.