Morning Overview

Pentagon’s classified AI network now powered by SpaceX, OpenAI, Google, Nvidia, Microsoft, Amazon, and Reflection

The Pentagon has signed agreements with at least seven major technology companies to deploy artificial intelligence directly onto its classified military networks, formally binding some of the world’s most valuable private firms into the backbone of U.S. national defense. The confirmed partners are SpaceX, OpenAI, Google, Nvidia, Microsoft, Amazon Web Services, and Reflection. Oracle may be an eighth, though its status is disputed between official and independent accounts.

The deals, disclosed in May 2025 by the Department of Defense under its recently adopted “War Department” branding, represent the most concrete step yet in the military’s push to embed commercial frontier AI inside systems that handle the nation’s most sensitive secrets. The rebranding itself, announced in early 2025, is politically contentious: the department had not used the “War Department” name since 1947, and the change has drawn both support and criticism from lawmakers and defense analysts. The agreements also draw a sharp line between the companies invited in and the one conspicuously left out: Anthropic, maker of the Claude family of AI models and one of the field’s most prominent safety-focused labs.

Who is in, who is out, and why it matters

The Defense Department’s official release lists eight companies by name: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, Amazon Web Services, and Oracle. The Associated Press, reporting independently, confirmed seven of those but did not include Oracle, creating a discrepancy neither source has resolved as of June 2026. Oracle already operates classified cloud infrastructure for the federal government through its OCI Government Cloud, so its inclusion would not be surprising, but the gap between the two accounts remains unexplained.

The seven confirmed names read like a roster of the companies that already dominate Pentagon technology contracts. SpaceX provides satellite communications through Starlink and launches classified payloads for the Space Force and intelligence agencies. Microsoft and AWS are the primary contractors on the Joint Warfighting Cloud Capability (JWCC), the military’s multi-cloud computing backbone. Nvidia designs the GPUs that power virtually every major AI training run in the world. Google, despite pulling out of Project Maven in 2018 after employee protests over drone-targeting AI, has since rebuilt its defense business and holds cloud contracts across multiple agencies.

OpenAI’s presence is particularly striking. The company was founded in 2015 as a nonprofit research lab with a stated mission to ensure artificial general intelligence benefits all of humanity. Over the past two years, it has restructured toward a for-profit model, dropped a blanket prohibition on military use of its technology, and now sits inside the Pentagon’s classified perimeter. That trajectory from idealistic startup to classified defense partner has unfolded faster than almost anyone in the AI industry predicted.

Reflection, the least well-known name on the list, is a newer entrant in the frontier AI space. Its inclusion alongside trillion-dollar incumbents signals that the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO), the entity most likely managing these agreements, is casting a wider net than just the obvious giants.

Anthropic’s absence and the unconfirmed dispute behind it

The AP reported that Anthropic was excluded from the group because of what it described as an ongoing dispute or litigation, citing unnamed officials familiar with the matter. The specific nature of that conflict, whether it involves intellectual property, contract terms, data-handling requirements, or something else entirely, has not been publicly detailed. The claim remains unconfirmed by either Anthropic or the Defense Department, and no court filings or formal complaints have surfaced in public records to corroborate it.

The exclusion, whatever its cause, carries strategic weight. Anthropic has positioned itself as the AI industry’s most vocal advocate for safety-first development, publishing extensive research on AI alignment and building what it calls “constitutional AI” guardrails into its models. Being shut out of the Pentagon’s classified AI push suggests that safety credentials alone do not guarantee access to what could become one of the largest and most consequential AI customers on the planet. It also raises a harder question: whether the Defense Department views Anthropic’s safety-first posture as a feature or a friction point when speed of deployment is the priority.

For Anthropic’s competitors, the lesson is blunt. Legal or contractual disputes with the federal government can translate directly into lost strategic ground, and that ground may be difficult to recover once classified systems are built around a rival’s models.

What the agreements actually cover

Almost nothing about the operational substance of these deals is public. Neither the Defense Department release nor the AP report describes which AI models will be deployed, what tasks they will perform, or what security and oversight protocols govern their use on classified networks.

That gap matters enormously. “Classified networks” in the Pentagon context typically refers to systems like SIPRNet (Secret-level) and JWICS (Top Secret/SCI), which carry intelligence assessments, operational plans, and communications that foreign adversaries actively target. Deploying commercial AI on those networks raises questions that the public record does not yet answer: How will the models be isolated from their commercial counterparts? Who audits their outputs? What happens if a model hallucinates or produces flawed analysis that feeds into a real-world military decision?

The potential applications range widely. AI on classified systems could support intelligence analysis by sifting through satellite imagery or intercepted communications far faster than human analysts. It could optimize logistics, from ammunition supply chains to troop deployment schedules. It could assist in cyber defense, identifying intrusion patterns across military networks. Or it could support more sensitive functions that the Pentagon has no incentive to discuss publicly.

None of the seven confirmed companies had issued public statements about the agreements as of early June 2026. That silence is notable for firms like Google and Microsoft, which have previously published AI ethics principles and faced pressure from employees and shareholders to be transparent about military work.

Concentration risk and supply-chain fragility

The agreements accelerate a pattern that has been building for years: the U.S. military’s core technological capabilities are increasingly inseparable from a handful of private companies. The same firms that run consumer search engines, cloud storage, and chatbots now also underpin classified defense infrastructure. That consolidation creates real efficiencies. These companies have engineering talent, computing resources, and AI research capacity that the Pentagon cannot replicate internally.

But it also creates fragility. If a single provider suffers a security breach, a corporate restructuring, or a shift in business priorities, classified systems built around its AI could be disrupted in ways that would be difficult to remedy quickly. The public record offers no visibility into how the Pentagon plans to manage that supply-chain risk or prevent over-reliance on any one vendor.

There is also the question of oversight. Congressional committees with jurisdiction over defense and intelligence spending have not yet held public hearings on these specific agreements. Independent watchdogs, including the Government Accountability Office, have repeatedly flagged the Pentagon’s AI adoption as an area where governance has lagged behind deployment. Whether these classified deals include the kind of audit trails, testing requirements, and accountability mechanisms that outside experts have called for remains unknown.

A classified AI infrastructure built largely out of public view

The immediate facts are narrow but significant: the Pentagon has decided that frontier AI from commercial labs belongs inside its most sensitive networks, and it has formalized that decision with a defined group of partners. The competitive field for classified defense AI is now set, at least for this phase, and companies outside it face a materially different strategic position.

The deeper questions, about implementation, safeguards, cost, and accountability, remain entirely out of public view. Answering them will require future disclosures from the Pentagon, reporting from journalists with access to classified briefings, or oversight actions from Congress and inspectors general. Until then, the most consequential AI deployment in U.S. government history is proceeding largely in the dark.

*This article was researched with the help of AI, with human editors creating the final content.