Morning Overview

Pentagon signs classified AI deals with 7 tech giants — and freezes out the one company that said no

The Pentagon has locked in classified artificial intelligence contracts with seven of the largest technology companies in the United States, granting them access to some of the military’s most restricted networks. Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection AI, and SpaceX will deploy what the Department of War calls “frontier AI capabilities” on systems rated at Impact Level 6 and Impact Level 7, classification tiers reserved for the military’s most sensitive data, including intelligence products and active warfighting plans.

One prominent AI company is missing from that list: Anthropic, the San Francisco-based maker of the Claude AI model. And its absence is not an oversight. It is the result of a dispute over safety and ethics that has now spilled into federal court, triggered a judicial injunction, and exposed a sharp divide between Pentagon leadership and the military personnel who already depend on Anthropic’s tools every day.

Seven companies in, one frozen out

The Department of War’s official announcement names all seven firms and specifies that their AI tools will operate inside classified network environments where the military handles its most guarded information. Impact Level 6 covers data classified as Secret that requires enhanced security controls; Impact Level 7 applies to Top Secret and Sensitive Compartmented Information. Deploying commercial AI inside those environments represents a significant expansion of how the military uses generative technology for intelligence analysis, operational planning, and administrative work behind secure doors.

The same release noted that GenAI.mil, the Pentagon’s broader AI platform, has now reached 1.3 million users across the defense workforce, a figure that underscores how quickly generative AI has moved from pilot programs to daily use across combatant commands, intelligence agencies, and support offices.

Most of the seven are familiar defense and cloud contractors. But two stand out. Reflection AI is a lesser-known firm whose inclusion signals the Pentagon’s willingness to work with emerging players. SpaceX, best known for launch vehicles and satellite internet, is now positioned inside the military’s AI infrastructure as well, though the Department of War has not detailed what specific AI capabilities the company will provide on classified systems.

Why Anthropic was excluded

Anthropic is structured as a Public Benefit Corporation, a legal designation that binds the company to weigh public safety alongside profit. It has built its reputation around what it calls a Responsible Scaling Policy, a framework that sets internal thresholds for when and how its AI models can be deployed in high-stakes settings. That posture has made Anthropic a favorite among researchers and policymakers who worry about the unchecked spread of powerful AI systems. It has also, evidently, put the company on a collision course with a Pentagon that wants frontier AI deployed fast and without conditions it views as restrictive.

Associated Press reporting confirmed that the dispute centers on ethics and safety disagreements over military applications of Anthropic’s technology. The specific scenarios or weapon-adjacent uses that Anthropic objected to have not been disclosed publicly, but the company’s exclusion from the classified deals makes clear that the disagreement was substantive enough to end negotiations.

Defense Secretary Pete Hegseth has gone further. Reuters reported that Hegseth has pushed to remove Claude from Pentagon systems entirely, not just from classified networks but from the broader defense IT ecosystem. That directive, however, has run into resistance from military users who told Reuters that replacing Claude is far more complicated than leadership acknowledges. Analysts, planners, and support staff have built workflows, custom interfaces, and analytic dashboards around the model. Ripping it out would mean rebuilding tools that are already woven into daily operations.

The lawsuit and the injunction

The confrontation escalated when the Pentagon designated Anthropic a supply chain risk, a label that carries severe consequences. Under federal procurement rules, a supply chain risk designation can effectively bar a company from doing business with any part of the U.S. government, not just the Department of War.

Anthropic responded by filing suit. In the case docketed as Anthropic PBC v. U.S. Department of War, the company challenged both the process behind the designation and its sweeping implications. A federal judge agreed that the situation warranted immediate intervention and issued a temporary injunction blocking the Department of War from enforcing the supply chain risk label while the litigation proceeds.

The injunction keeps Anthropic’s federal business alive for now, but the order is temporary. The judge’s full reasoning has not appeared in publicly accessible transcripts as of June 2026, leaving open the question of whether the court views the Pentagon’s action as procedurally flawed, substantively excessive, or both. If the designation is ultimately upheld, Anthropic could be locked out of the federal market entirely. If the injunction becomes permanent, the Pentagon will need a different legal rationale for sidelining the company, or it will have to negotiate terms both sides can accept.

The gap between the directive and the ground

What makes this dispute more than a contracting spat is the distance between what Pentagon leadership wants and what military personnel actually use. The 1.3 million GenAI.mil users represent a workforce that has adopted generative AI tools at remarkable speed. Some portion of those users rely on Claude, though neither the Pentagon nor Anthropic has disclosed how usage breaks down across models from Anthropic, OpenAI, Google, and Microsoft.

Military users who spoke to Reuters described Claude as deeply embedded in their work. No official assessment has quantified the switching costs or operational disruption that would follow if Claude access disappeared overnight, but the pattern is consistent with how large organizations struggle to swap out software tools once they become part of institutional muscle memory. Mission-planning aids, intelligence-analysis pipelines, and routine administrative automation would all need to be rebuilt or migrated to a different model, a process that carries both cost and risk.

The Pentagon has not addressed this tension publicly. Its announcement focused on the seven new classified agreements and the scale of GenAI.mil adoption, not on the practical consequences of removing a tool that parts of the defense workforce have come to depend on.

What the public still does not know

For all the attention this story has drawn, significant gaps remain. The Department of War has not released projected timelines for when the seven companies will have their AI tools operational on classified networks, nor has it disclosed cost estimates for the agreements. Without those figures, taxpayers and oversight bodies cannot gauge how much money is committed or how quickly the military expects to gain new capability.

No independent technical assessment has evaluated whether the seven approved providers offer safety guardrails comparable to what Anthropic built into Claude. No congressional testimony or inspector general review has examined whether excluding a company known for cautious AI deployment creates downstream risk on classified systems where errors could have serious national security consequences.

The seven companies that signed on now hold privileged access to some of the government’s most sensitive computing environments. Their tools are poised to shape how analysts process intelligence, how planners model scenarios, and how support staff handle routine work inside secure facilities. Yet the safeguards, human oversight mechanisms, and fail-safe procedures governing these deployments remain undisclosed.

A test case with no clear precedent

The Anthropic dispute is shaping up as the first major legal and policy test of what happens when a frontier AI company tells the Pentagon no. The military is scaling AI adoption at a pace that would have been unimaginable five years ago, pushing commercial models into environments where the stakes include intelligence operations and combat planning. At the same time, it is moving to cut out the one major AI firm that built its identity around the idea that some uses of powerful technology require restraint.

Whether the courts side with Anthropic or the Pentagon, the outcome will set a marker for every AI company weighing whether to pursue defense contracts and on what terms. For the military, the question is whether speed of adoption and breadth of access matter more than keeping a safety-focused supplier in the mix. For Anthropic, the question is whether principled refusal carries a price the company can survive. The next round of court filings may begin to answer both.

*This article was researched with the help of AI, with human editors creating the final content.