The Pentagon has activated agreements allowing seven major technology companies to run artificial intelligence tools on its classified military networks, a milestone that places frontier AI models inside some of the most sensitive systems in the U.S. government. The companies now cleared for what the Defense Department calls “lawful operational use” are OpenAI, Google, Microsoft, Amazon Web Services, Oracle, NVIDIA, and Reflection; SpaceX is also named in the official release.
Anthropic, the San Francisco-based AI lab behind the Claude model family and one of the most prominent safety-focused developers in the field, is not among them. Its exclusion traces directly to a White House directive earlier this year that triggered a legal battle over federal access, and the fallout has left the company locked out of the Pentagon’s classified AI buildout even after a federal court forced its reinstatement on civilian platforms.
How the freeze happened
On February 27, 2026, the General Services Administration confirmed it had acted “in support of President Trump’s directive” when it pulled Anthropic from two critical federal procurement channels: the government’s AI marketplace at USAi.gov and the Multiple Award Schedule. The removal was categorical. Anthropic’s products disappeared from the storefront agencies use to purchase approved commercial software, effectively cutting the company off from routine government sales overnight.
Anthropic challenged the action in court. A federal judge in the Northern District of California granted a preliminary injunction in Case No. 26-cv-01996-RFL, and on April 3, 2026, GSA announced it was withdrawing the removal and restoring Anthropic’s technology to the pre-directive status quo. On paper, Anthropic was back. Agencies could once again purchase its tools through standard civilian procurement channels.
But that legal victory did not extend to the Pentagon.
What the classified deals actually involve
The Defense Department’s own announcement, published on its official site, names eight firms that signed agreements to deploy AI on classified networks: SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Oracle. The Associated Press, reporting independently, described deals with seven tech companies and noted Anthropic’s absence. No direct link to the AP report was available for verification at the time of publication, so the seven-company figure should be treated as AP’s editorial characterization rather than a confirmed Pentagon number. The discrepancy may reflect how SpaceX is categorized: the company is primarily a launch and satellite provider rather than a traditional AI developer, which would explain the AP’s lower count.
The Pentagon frames these agreements as a way to bring advanced AI models directly into secure environments rather than relying on unclassified commercial cloud services. The language used, “lawful operational use,” signals something beyond research pilots or sandbox experiments. These tools are intended to support real-world military decision-making in classified settings, a category that could encompass intelligence analysis, logistics planning, operational forecasting, or integration with command-and-control systems.
No publicly available document specifies which models will run on which networks, what classification levels are involved, or what guardrails govern how frontier AI interacts with sensitive intelligence data. Dollar values for the agreements have not been disclosed, and no public comparison to prior Pentagon AI contracting efforts, such as the canceled JEDI cloud program or the ongoing work of the Chief Digital and Artificial Intelligence Office, has been offered by the Defense Department. That lack of detail is standard for classified programs but leaves significant questions unanswered about the scope, cost, and risk profile of these deployments.
Why Anthropic’s absence matters
Anthropic has built its reputation around AI safety research. The company developed Constitutional AI, a training method designed to make models more controllable and less prone to harmful outputs, and has published extensively on the risks of deploying powerful AI systems without adequate safeguards. Its Claude models compete directly with OpenAI’s GPT series and Google’s Gemini.
Excluding a lab with that safety pedigree from classified military deployments raises a pointed question: does the Pentagon’s approved vendor list reflect the best available thinking on how to deploy AI responsibly in high-stakes environments, or has a political dispute narrowed the field at exactly the wrong moment?
John Bansemer, a retired Air Force lieutenant general who directs the CyberAI Project at Georgetown University’s Center for Security and Emerging Technology and has studied Pentagon AI adoption, has noted in prior public commentary that the Defense Department’s ability to attract a broad base of AI vendors is critical to maintaining competitive pressure on quality and safety. While Bansemer has not commented specifically on Anthropic’s exclusion from these classified deals, the general concern among defense technology analysts is that a smaller vendor pool reduces the Pentagon’s leverage and limits the diversity of safety approaches available for high-stakes deployments.
No public statement from Anthropic addresses its exclusion from the classified deals. Whether the company sought inclusion and was denied, declined to participate on principle, or was simply never invited remains unknown. The silence from both sides leaves a gap that matters, because the answer would reveal whether this is a policy choice, a procurement technicality, or a consequence of the broader legal fight that neither party wants to litigate in public.
Two tracks, one government
The practical result of the past several months is a split in federal AI procurement that did not exist before. Anthropic’s civilian access was restored by court order. Its path into classified defense networks remains closed. These two tracks operate under different legal authorities: civilian IT purchases flow through GSA schedules and standard acquisition rules, while classified defense programs fall under separate statutory frameworks with their own oversight regimes. A court victory on one track does not automatically open the other.
For the defense AI market, this creates a consolidation dynamic. The companies now inside the Pentagon’s classified networks have a structural advantage that compounds over time. Once a model is integrated into classified workflows, switching costs rise sharply. Security certifications, data handling protocols, and user familiarity all create friction that favors incumbents. Any company locked out during this initial wave faces the prospect of catching up to rivals who are already embedded.
That dynamic extends beyond Anthropic. Smaller AI firms and startups watching this process now know that federal civilian access and defense access are separate gates, and that political disputes with the White House can close the defense gate regardless of what courts say about the civilian one.
Anthropic’s legal win stops at the Pentagon’s classified door
The Pentagon is moving forward with frontier AI on classified systems while one of the field’s leading developers remains on the outside. The companies with active agreements are now positioned to shape how the U.S. military uses artificial intelligence in operational settings, from intelligence fusion to logistics to planning tools that could influence real decisions in real time.
Whether Anthropic eventually gains access depends on factors that no public document currently addresses: the trajectory of its legal dispute with the administration, any behind-the-scenes negotiations with the Defense Department, and whether the White House directive that started this chain of events is modified, rescinded, or hardened into permanent policy. Until those questions are answered, the company that has argued most loudly for building AI safely is absent from the place where the stakes for getting AI wrong are highest.
*This article was researched with the help of AI, with human editors creating the final content.