The National Security Agency used an artificial intelligence model built by Anthropic even after the Department of Defense labeled the company a supply chain risk, Axios reported in April 2026. The revelation that one arm of the U.S. government quietly deployed a tool another arm had effectively blacklisted has thrown a spotlight on a deepening fracture inside the national security establishment over who gets to decide which AI is safe enough for classified work.
The dispute has already landed in federal court, drawn a sharp rebuke from a U.S. senator, and raised uncomfortable questions about whether Washington’s procurement machinery can keep pace with the technology it is scrambling to adopt.
The Pentagon’s designation and the court’s pushback
Two hard facts anchor the story. The DOD designated Anthropic a supply chain risk, a classification that can block a company from defense contracts and warn other agencies that its products pose security concerns. In April 2026, a federal judge temporarily blocked the Pentagon from enforcing the label, finding that the government had failed to follow required procedures in imposing it.
The court’s order carried a pointed message: branding a domestic AI firm as a threat without transparent evidence risks chilling the very innovation the United States says it needs to stay ahead of China. And a judge does not ordinarily grant that kind of temporary relief without concluding the challenger is likely to succeed on the merits.
On Capitol Hill, Sen. Ed Markey (D-Mass.) went further. In a press release, Markey demanded immediate congressional action to reverse the designation, calling the DOD’s decision “retaliatory.” His statement also referenced contract terminations alongside the supply chain label, suggesting the fight extends well beyond a bureaucratic classification into the financial relationship between Anthropic and the defense establishment. Markey’s office did not specify what triggered the alleged retaliation.
What the Axios report claims
According to Axios, the NSA used Anthropic’s Mythos model while the Pentagon’s blacklist was active. Mythos is part of Anthropic’s family of large language models, though the company has shared few public details about its capabilities or how it differs from the firm’s commercial Claude line. The Axios report, citing unnamed officials, did not identify the contract vehicle the NSA used to obtain the model or the specific mission it supported.
No official NSA statement, procurement record, or inspector general report in the public record corroborates the claim. That does not make it false; investigative reporting built on anonymous sources has a long track record of accuracy in national security journalism. But readers should understand the claim rests on Axios’s sourcing rather than on verified government documents.
The jurisdictional question matters here. The NSA is a combat support agency within the Department of Defense, reporting to the Secretary of Defense, but as an element of the intelligence community it also operates under the coordination of the Director of National Intelligence, and intelligence community acquisition authorities run on a separate track from standard DOD procurement rules. Whether a Pentagon supply chain designation binds an agency straddling both chains of command is an unsettled question that no source in the current reporting has answered clearly.
Competing signals from competing institutions
Taken together, the available evidence paints a picture of institutional disagreement at every level. The Pentagon moved to restrict Anthropic. A federal court called that move procedurally suspect. A sitting senator labeled it retaliatory. And a major news outlet reported that a separate intelligence agency ignored the restriction entirely.
Each data point tells a limited story on its own. Together, they suggest the U.S. government has not built a unified framework for deciding which AI companies can serve national security functions and under what conditions.
Based on available reporting, Anthropic itself has not publicly said whether it holds active intelligence community contracts or what safeguards govern government deployments of its models. The company’s legal challenge to the DOD designation is ongoing, but the substance of its filings beyond the procedural arguments noted in press coverage remains under seal. That leaves a gap between the courtroom fight over process and the substantive question of whether Anthropic’s tools actually pose the risks the Pentagon implied.
The Pentagon, for its part, has not publicly detailed the specific vulnerabilities it identified. Without that underlying risk assessment, outside observers have no way to judge whether the designation reflected genuine security concerns or, as Markey’s office suggested, an act of institutional score-settling tied to a contract dispute.
What the Anthropic dispute signals for AI procurement
The fallout extends well beyond one company. If one agency can blacklist an AI vendor while another quietly keeps using its products, every firm building tools for government work receives contradictory signals about what standards it must meet. That uncertainty can push companies toward commercial markets where the rules are clearer, even if the stakes are lower, draining talent and investment away from national security applications.
The episode also highlights a tension the government has not resolved: how to balance supply chain caution with the need to maintain a domestic AI ecosystem capable of competing globally. Overly broad or poorly justified designations risk sidelining American companies while adversaries accelerate their own military AI programs.
For policymakers, the Anthropic case makes the argument for a more coordinated framework spanning the Defense Department, the intelligence community, and civilian agencies. Clearer criteria for designating vendors as risks, standardized appeal processes, and explicit rules about how such labels travel across jurisdictional lines could prevent the kind of cross-agency divergence alleged here.
Until more documents surface or agencies offer on-the-record explanations, the dispute will remain an early and messy case study in what happens when the rush to deploy powerful AI systems collides with older, slower processes for managing supply chain risk. Those processes are now being tested in courtrooms and debated in Congress, even as intelligence agencies quietly decide for themselves which tools they trust enough to put to work.
*This article was researched with the help of AI, with human editors creating the final content.