The Pentagon has formally designated Anthropic, the company behind the Claude AI model family, as a supply chain risk, while the General Services Administration has restored the company to full standing on federal procurement platforms under a court order. Separately, unverified reports suggest the National Security Agency may be using Anthropic’s technology for intelligence work, though no official documentation or agency statement has confirmed that claim. The result is an extraordinary and unresolved split inside the federal government over whether Anthropic’s AI belongs in sensitive national security work.
Two branches of the same government have reached opposite conclusions. The Department of Defense designated Anthropic and its products as a supply chain risk, effective immediately, citing the principle that the military must be able to use technology for “all lawful purposes,” according to reporting from the Associated Press. Meanwhile, the General Services Administration, which manages technology purchasing for civilian agencies, announced on April 3, 2026, that it was restoring Anthropic to full standing on federal procurement platforms after a court issued a preliminary injunction blocking earlier restrictions.
The result: one part of the government treats Anthropic as a threat to the defense supply chain, another part has been ordered by a federal judge to make the company’s tools available again, and a third, the NSA, may already be putting those tools to work, though that claim remains unverified.
Why the Pentagon moved against Anthropic
The friction traces back to Anthropic’s acceptable use policy, which restricts how its AI models, including the Claude family, can be deployed in military, weapons development, and mass surveillance contexts. The Pentagon’s supply chain risk designation signals that defense officials view those restrictions as incompatible with military needs.
Under Section 3252 of Title 10 of the U.S. Code, the Defense Department has authority to flag vendors whose products or practices pose risks to the acquisition process. That mechanism is typically aimed at foreign adversaries or compromised hardware suppliers. Turning it against a major American AI company over a policy disagreement, rather than a technical vulnerability or foreign entanglement, is a striking escalation. It effectively warns defense contracting officers that purchasing Anthropic products could expose their programs to legal and compliance risk.
The Pentagon’s language about “all lawful purposes” suggests the dispute is not about whether Anthropic’s technology works, but about who gets to set the boundaries on how it is used. Defense leaders appear unwilling to accept a commercial vendor’s self-imposed limits on military applications, particularly as AI becomes central to intelligence analysis, logistics, and operational planning.
How a federal court forced the GSA’s hand
The GSA’s reversal did not come voluntarily. In its April 3 statement, the agency said it was withdrawing “prior actions and announcements regarding Anthropic technology” and returning the company to the status it held before February 27, 2026, the date restrictions were first imposed. The GSA explicitly cited a preliminary injunction as the reason.
A preliminary injunction is not a final ruling, but courts grant them only after weighing several factors, chief among them whether the party seeking relief has demonstrated a likelihood of success on the merits and a risk of irreparable harm without the order. That standard means a federal judge found initial reason to believe the restrictions on Anthropic were legally flawed, whether on procedural, statutory, or constitutional grounds.
Key details remain sealed or unpublished. The full text of the injunction, the identity of the court, and whether Anthropic itself or a third party brought the challenge have not appeared in the public record as of May 2026. Those gaps matter because the scope of the order will determine how durable the GSA’s restoration turns out to be. Preliminary injunctions can be narrowed, expanded, or dissolved as litigation proceeds.
Unverified reports of NSA use
Reports have circulated describing the NSA's use of Anthropic's AI technology, but no underlying documentation supports the claim in the public record. No contract awards, task orders, agency memoranda, or official NSA statements confirm that the agency is deploying a specific Anthropic model for intelligence work. The Associated Press reporting cited above covers the Pentagon's supply chain designation and does not independently verify NSA use of Anthropic products. Until primary evidence emerges, the claim should be treated as plausible but unconfirmed.
That gap matters for a practical reason: it is unclear whether the NSA would even be bound by the Pentagon’s supply chain designation. Intelligence agencies often operate under separate legal authorities and budget lines from the rest of the Defense Department. Some intelligence programs use DoD contracting channels, which would bring them under the supply chain risk framework. Others run through distinct procurement vehicles tied to the intelligence community’s own acquisition regulations and oversight structures.
If the NSA is using Anthropic’s tools through a channel not covered by the Pentagon’s designation, it may face no formal barrier. But the optics of one intelligence agency adopting technology that the military’s own leadership has flagged as a supply chain risk would almost certainly draw scrutiny from congressional oversight committees and inspectors general.
What this means for other agencies
Federal contracting officers across the government now face competing signals. The GSA’s restoration means Anthropic products are technically available for purchase through major civilian procurement vehicles. The Pentagon’s designation means defense-side buyers are on notice that using those same products could trigger compliance problems.
Agencies retain discretion to conduct their own security and risk assessments. Some may have independently paused Anthropic deployments or launched internal reviews that remain in place regardless of the GSA’s reversal. Others may treat the court-ordered restoration as a green light. The lack of unified guidance from the White House or the Office of Management and Budget leaves each agency to navigate the contradiction on its own.
The dispute also sends a signal to Anthropic’s competitors. Companies like OpenAI and Google have taken different approaches to military partnerships, with OpenAI dropping a blanket prohibition on military use in early 2024 and Google expanding its defense contracts. If the Pentagon’s designation holds, it could push agencies toward vendors with fewer restrictions on government use, reshaping the competitive landscape for federal AI contracts worth billions of dollars over the coming decade.
Competing federal signals with no resolution in sight
As of May 2026, the Pentagon has not withdrawn its supply chain risk finding. The GSA has not challenged the court order. And the unverified reports of NSA use of Anthropic technology sit in a gray zone between the two positions, neither clearly authorized nor clearly prohibited by the conflicting guidance.
No public statement from Anthropic, its CEO Dario Amodei, or any congressional oversight figure has addressed the standoff directly. The company’s acceptable use policy remains in place, but its full internal response to the Pentagon designation, any negotiations that preceded it, and any concessions either side may have offered are not part of the public record.
What is clear is that the federal government has not figured out how to handle an AI company that builds powerful tools but insists on limiting how they are used. The Pentagon wants unrestricted access. Anthropic wants guardrails. A federal court has, for now, sided with keeping the technology available. Until the injunction is resolved, the Pentagon’s designation is tested in court, or Congress steps in with clearer rules, agencies will be left reading two sets of instructions that point in opposite directions.
*This article was researched with the help of AI, with human editors creating the final content.