The Pentagon just invited some of the biggest names in tech into its most secretive computing environments, and one notable company was told to stay home.
In May 2026, the U.S. Department of Defense announced it had signed agreements with seven technology firms to deploy frontier artificial intelligence on classified networks rated at Impact Level 6, the highest tier in the Pentagon’s cloud security framework, reserved for classified information up to the Secret level. The companies: Microsoft, Google, Amazon Web Services, Oracle, OpenAI, NVIDIA, SpaceX, and a lesser-known firm called Reflection. (That list runs to eight entities; the headline figure of seven likely treats AWS as part of Amazon’s broader presence.)
Conspicuously absent was Anthropic, the AI safety company behind the Claude model family. Anthropic was excluded after refusing the Pentagon’s demand that its technology be available for “all lawful purposes” without company-imposed restrictions. The standoff has turned into the sharpest public collision yet between national security imperatives and the AI industry’s self-imposed safety commitments.
What the Pentagon announced
The Defense Department’s official release confirmed that the agreements cover integration of “frontier AI” into IL6 systems. For readers unfamiliar with the scale, IL6 is where the military keeps information whose exposure could directly compromise active operations. Think war plans, intelligence assessments, and real-time surveillance feeds. Deploying AI inside those networks means the models will process data that, until now, only cleared human analysts could touch.
The announcement specified that the technology would be used for “lawful operational use” and named all participating companies. It did not, however, disclose the financial value of the contracts, their duration, or the specific capabilities each company would provide. The Associated Press confirmed the announcement and noted the absence of dollar figures, a gap that makes it impossible for outside observers to judge whether this is a short-term pilot or a long-term commitment.
The deals did include some guardrails. According to the Washington Post, the agreements reference requirements for human operators to remain “in the loop” on sensitive decisions and contain assurances that the systems will not be used for unauthorized surveillance of U.S. persons. Those provisions suggest the Pentagon was willing to accept certain boundaries from companies that signed on. What remains unclear is whether those provisions carry enforcement mechanisms or function primarily as reassurance language.
Why Anthropic was left out
The conflict between Anthropic and the Defense Department had been building for months before the final announcement. At its core was a disagreement over who gets to decide how a powerful AI system is used once it enters a classified environment.
The Pentagon’s position, as characterized by defense officials in the Post’s reporting, was straightforward: company-imposed safety restrictions amount to a private veto over military decision-making, and that is unacceptable. “The Department will not outsource decisions about the lawful use of military tools to any private entity,” one senior defense official told the Washington Post, framing the demand as a matter of sovereign authority rather than corporate negotiation.
Anthropic refused. The company has built its reputation around what it calls its Responsible Scaling Policy, a framework that ties the deployment of increasingly powerful models to specific safety evaluations and usage restrictions. The guardrails at the center of the Pentagon dispute reportedly involved limits on autonomous weapons applications and protections against domestic surveillance overreach. An Anthropic spokesperson, responding to press inquiries after the announcement, said the company “remains committed to working with government partners in ways that are consistent with our safety framework,” without elaborating on the specifics of the failed negotiation.
Bloomberg reported the negotiation breakdown in detail in February 2026, identifying the officials involved and the specific meetings at which Pentagon representatives pressed Anthropic to drop those restrictions. Anthropic held firm. The talks collapsed, and the company was shut out of the final agreements even as every other invited firm accepted the Defense Department’s terms.
Anthropic has not released a detailed public account describing what, if any, middle-ground proposals were discussed. Whether the company will pursue alternative government work that fits within its safety framework, or whether this exclusion signals a longer-term distance from national security contracts, remains an open question.
What the deals do not answer
The public record on these agreements is thin, and several of the most consequential questions remain unanswered.
First, the phrase “lawful operational use” is broad enough to cover everything from logistics optimization and intelligence analysis to targeting support and autonomous surveillance. Without published contract text, the boundary between permitted and prohibited applications is defined entirely by internal Pentagon interpretation. That ambiguity frustrates watchdog groups, lawmakers, and employees inside the participating companies who want to understand what their technology may ultimately do.
Second, no independent technical evaluation of the AI models being deployed has been made public. Questions about bias, reliability, hallucination rates, and escalation risk in high-stakes military contexts are not hypothetical concerns. They are engineering problems that require rigorous testing, and there is no indication that any such assessment has been shared with Congress or outside reviewers.
Third, the internal dynamics at participating companies are opaque. Reporting has confirmed employee pushback at firms like Microsoft and Google, both of which have faced internal dissent over military contracts before. But no on-the-record statements from workers, ethics boards, or union representatives have surfaced. Whether those companies negotiated their own safety conditions or simply accepted the Pentagon’s terms wholesale is not established.
Finally, congressional oversight of these deals is undefined. There is no public indication that any oversight committee has reviewed the full terms, requested briefings, or commissioned independent analysis. The Pentagon’s 2020 AI ethics principles, which called for AI systems to be “responsible, equitable, traceable, reliable, and governable,” remain the closest thing to a public framework, but those principles are aspirational guidelines, not enforceable rules. How they interact with the new contract language is anyone’s guess.
A rift over who controls the guardrails
Inside the fluorescent-lit conference rooms where these negotiations played out, two incompatible visions of AI governance collided. On one side, Pentagon officials who view any external constraint on military tools as an unacceptable concession of authority. On the other, a company that staked its identity on the premise that some uses of powerful AI should be off-limits, regardless of who is asking.
For the seven companies now inside the Pentagon’s classified AI infrastructure, these deals represent a significant commercial win and a foothold in high-security government computing, though the precise financial and strategic value remains impossible to assess without disclosed contract terms. For Anthropic, the outcome is a costly bet that its safety-first identity is worth more than a seat at the Pentagon’s table. That bet may pay off in the commercial market, where enterprise customers increasingly value safety credentials. Or it may leave the company locked out of the national security sector as competitors entrench themselves.
For the public, the episode crystallizes a tension that will only intensify as AI systems grow more capable. The military wants unrestricted access to the most powerful technology available. Some of the companies building that technology believe certain uses should be off-limits, even for a government client. And the mechanisms that might mediate between those positions, including congressional oversight, independent audits, and enforceable ethical standards, are either absent or untested.
Until more documentation surfaces, assessments of these agreements will rest on a narrow base of evidence. But the decisions they authorize are already shaping how AI operates in some of the most sensitive corners of American power, and the debate over who controls those boundaries is just getting started.
This article was researched with the help of AI, with human editors creating the final content.