The Department of Defense has formalized artificial intelligence agreements with a group of the country’s most powerful technology companies, granting them access to classified military networks that handle some of the nation’s most sensitive intelligence and operational data. Anthropic, the AI safety startup behind the Claude chatbot, is notably absent from the arrangement after a public clash with defense officials.
The agreements, announced by the Pentagon in late May 2026, name eight participating firms: SpaceX, OpenAI, Google, NVIDIA, Reflection AI, Microsoft, Amazon Web Services, and Oracle. The companies will deploy AI capabilities on classified networks for what the Defense Department described as “lawful operational use.”
Separately, the Associated Press reported that the military struck deals with seven companies and that Anthropic’s exclusion followed a public dispute and legal fight with the Pentagon. The AP’s count of seven versus the Pentagon’s list of eight has not been publicly reconciled; the discrepancy may reflect different definitions of what constitutes a finalized deal versus a broader framework agreement.
What the agreements actually involve
The Pentagon’s announcement specifies that commercial AI tools will operate on classified networks, a category that includes systems like SIPRNet (used for secret-level communications) and JWICS (the Joint Worldwide Intelligence Communications System, which handles top-secret and sensitive compartmented information). Placing commercial AI inside these environments marks a significant escalation from earlier pilot programs, which largely kept private-sector tools on unclassified or lower-security systems.
The Defense Department did not disclose the financial terms, contract durations, or specific military functions the AI tools will support. The phrase “lawful operational use” leaves open whether the technology will be applied to intelligence analysis, logistics planning, cybersecurity defense, targeting support, or all of the above. Pentagon officials have not publicly elaborated beyond the initial release.
For context, the Defense Department requested more than $1.8 billion for AI-related programs in its fiscal year 2025 budget, according to congressional budget documents. The new classified agreements suggest that spending trajectory is accelerating, with commercial providers now embedded at the military's highest security tiers.
Why Anthropic is on the outside
Anthropic has positioned itself as the AI industry’s most vocal advocate for cautious deployment. The company’s Responsible Scaling Policy, published in 2023, commits it to evaluating catastrophic risks before expanding the capabilities of its models. Anthropic CEO Dario Amodei has spoken publicly about the need for guardrails on military and government AI use, though the company has not issued a blanket refusal to work with defense agencies.
The precise nature of the dispute remains unclear. The AP described a public disagreement and legal fight, but no court filings or formal complaints have surfaced in public records. Whether Anthropic objected to specific contract terms, raised ethical red lines the Pentagon would not accommodate, or was excluded after negotiations broke down has not been confirmed by either side. Neither Anthropic nor the Defense Department has commented publicly on the specifics.
The result is that one of the three or four most capable frontier AI labs in the world now sits outside the Pentagon’s classified AI ecosystem, while its direct competitors occupy seats at the table.
How this fits the Pentagon’s AI trajectory
The classified agreements did not emerge in a vacuum. The Defense Department has been building toward deeper commercial AI integration for nearly a decade, with each step generating its own controversy:
- Project Maven (2017): The Pentagon’s first major push to use commercial AI for analyzing drone surveillance footage. Google withdrew from the program in 2018 after employee protests over the military application of its technology.
- JEDI cloud contract (2019): A $10 billion winner-take-all cloud computing contract awarded to Microsoft, then canceled in 2021 amid legal challenges from Amazon Web Services. It was replaced by the multi-vendor Joint Warfighting Cloud Capability (JWCC) program.
- Chief Digital and AI Office (2022): The Pentagon consolidated its AI efforts under a single office, the CDAO, signaling that AI was no longer an experimental side project but a core operational priority.
The new classified-network agreements represent the next logical step: moving commercial AI from cloud infrastructure and analytical tools on lower-security systems into the Pentagon’s most restricted environments, where the most consequential decisions are made.
What the companies gain and risk
For the seven (or eight) participating firms, the agreements open access to a defense AI market that is growing faster than almost any other government technology sector. They also lock those companies into security frameworks that impose strict limits on transparency, employee access, and public disclosure. Engineers working on classified programs typically need top-secret clearances, and the work itself cannot be discussed outside secured facilities.
That tradeoff has historically been a source of tension inside Silicon Valley. Google’s withdrawal from Project Maven demonstrated that defense contracts can trigger internal backlash. But the competitive landscape has shifted since 2018. OpenAI quietly dropped a clause in its usage policy that had prohibited military applications. Microsoft has long maintained its defense business through partnerships and acquisitions. Amazon Web Services built classified cloud regions specifically for intelligence agencies. The cultural resistance that once made Pentagon work controversial in tech circles has, for most of these companies, given way to commercial pragmatism.
Anthropic’s absence cuts in the opposite direction. By staying out, the company preserves its brand as a safety-first lab, but it also forfeits revenue and influence over how the military deploys AI. If the Pentagon proceeds without input from a company that has invested heavily in alignment research and risk evaluation, the safety considerations Anthropic champions may simply go unrepresented in the rooms where deployment decisions are made.
The questions Congress and the public have not answered
Several critical issues remain unaddressed. Congressional oversight of classified AI deployments is limited by the same secrecy that governs the networks themselves. Members of the armed services and intelligence committees receive briefings, but public hearings on the specifics of these agreements are unlikely given their classification level.
AI ethics researchers have raised concerns about deploying large language models and other AI systems in high-stakes military environments where errors can have lethal consequences. The Defense Department’s own AI ethics principles, adopted in 2020, call for AI systems that are “responsible, equitable, traceable, reliable, and governable.” Whether those principles translate into enforceable contract terms within the new agreements is unknown.
Allied nations are watching closely as well. The United Kingdom, Australia, and other Five Eyes partners have their own emerging AI defense strategies, and the Pentagon’s decision to formalize commercial AI access at the classified level could set a template that allies adopt or push back against.
What is clear as of June 2026 is that the Pentagon has made a structural bet: commercial AI belongs inside its most sensitive operations, and the companies willing to play by the military’s rules will be the ones shaping how that technology is used. Anthropic’s exclusion is not just a contract dispute. It is a signal about what the Defense Department values more in its AI partners: capability and compliance, or caution.
*This article was researched with the help of AI, with human editors creating the final content.