Morning Overview

OpenAI strikes deal to run AI on US Department of War classified networks

OpenAI has struck a deal to deploy its artificial intelligence models on the classified networks of the U.S. Department of War, the Pentagon’s new name under the Trump administration. The agreement, disclosed late Friday, positions the San Francisco company as a direct supplier to the military at a moment when a rival AI firm has just been frozen out of federal platforms. The timing and terms of the pact raise sharp questions about how quickly the defense establishment is absorbing commercial AI and what safeguards actually govern that access.

Pentagon Deal Follows Anthropic’s Federal Ouster

The contract arrived hours after the General Services Administration announced it had removed Anthropic from USAi.gov and the GSA Multiple Award Schedule, the government’s main procurement vehicle for cloud-based AI tools. USAi.gov functions as a secure, cloud-based federal AI evaluation suite, and losing access to it effectively bars Anthropic from routine government work. That action followed what multiple outlets described as a clash between Anthropic and the administration over national security AI policy, underscoring how political alignment on security doctrine is becoming a prerequisite for access to federal AI procurement channels.

Sam Altman negotiated the OpenAI agreement directly with senior Pentagon officials, according to The New York Times. The competitive backdrop is hard to ignore: several senior Anthropic executives previously worked at OpenAI, and the two companies have been locked in a contest for government contracts. With Anthropic sidelined, OpenAI faces less friction in winning defense business, a dynamic that compresses the normal vetting period for sensitive government technology partnerships and raises the stakes for how quickly guardrails are defined and enforced.

Classified Access Still Unresolved

Despite the headline framing, a gap exists between the deal’s ambitions and its current operational reach. The New York Times reported that OpenAI is not yet cleared for classified use, in part because its technologies are not available through Amazon’s cloud computing infrastructure, which handles much of the Pentagon’s classified hosting. That distinction matters: signing a contract is not the same as running models inside a secure compartmented information facility. The pathway from agreement to actual deployment on classified networks involves additional security certifications, hardened infrastructure, and integration with existing command-and-control systems that neither OpenAI nor the Department of War has publicly detailed.

OpenAI itself acknowledged, in a statement first described in Reuters reporting on the deal, that the contract is formally with the Department of Defense, which the Trump administration has rebranded as the Department of War. The renaming carries more than symbolic weight; it signals a shift in institutional identity that aligns with a more aggressive posture on autonomous systems and AI integration. For OpenAI, the naming convention creates a messaging challenge: the company must reconcile its public brand as a safety-focused AI lab with a contract partner that now explicitly identifies itself with warfighting, while convincing both policymakers and the public that its systems will not be repurposed for offensive or destabilizing applications.

Safeguards OpenAI Says Set It Apart

In a Saturday statement, OpenAI shared its rationale for the Pentagon partnership and argued that the deal contains more safeguards than typical defense contracts, including requirements for personnel with security clearances to work alongside government staff. The company framed the protections as exceeding industry standards for safety, presenting the arrangement as a model for how commercial AI can be integrated into national security settings without abandoning commitments to responsible use. Yet those claims so far rest on OpenAI’s own disclosures rather than independent audit or statutory oversight, leaving external observers to take the company largely at its word.

Among the technical controls, OpenAI described automated monitoring of model behavior within the deployed systems, and the company’s posted contract language cites Executive Order 12333, the longstanding directive that governs intelligence activities and the handling of private information. Invoking EO 12333 suggests the contract anticipates scenarios where AI models could interact with intelligence data streams, placing the technology squarely inside some of the most sensitive operations the government conducts. The question analysts and civil liberties groups will press is whether automated tracking and internal policies can meaningfully constrain AI tools once they are embedded in classified workflows, especially where errors or misuse could have consequences far beyond a single misclassification or hallucinated output.

Weapon Systems Directive Adds Context

The deal landed alongside a separate but related policy update. The Department of War announced a revision to Directive 3000.09 on autonomy in weapon systems, stating that the department’s approach to autonomy is “responsible and lawful.” The updated directive sets requirements for how autonomous and semi-autonomous systems must be designed, tested, and fielded, including provisions for human judgment in the use of lethal force and for rigorous evaluation before deployment. While the directive does not name OpenAI or any specific vendor, its timing creates a policy frame that could smooth the path for commercial AI tools to operate closer to lethal systems, provided they are wrapped in the language of responsibility and compliance.

Officials have emphasized that the directive is meant to reassure allies and the public that the United States will not field fully uncontrolled autonomous weapons, even as it accelerates research into AI-enabled targeting, logistics, and battlefield analysis. Critics note that phrases like “appropriate levels of human judgment” are open to interpretation, and they worry that the combination of powerful commercial models and permissive interpretations of the directive could erode meaningful human control in practice. In that light, the OpenAI contract is not merely a procurement decision but a test case for how much of the military’s autonomy agenda will be driven by private platforms whose primary accountability runs through corporate governance rather than public law.

Transparency, Oversight, and the Road Ahead

Beyond the specific terms of the OpenAI deal, the episode illustrates how quickly the center of gravity in military AI is shifting toward a handful of large vendors. General-purpose models developed for consumer and enterprise markets are being adapted, with relatively limited public debate, to roles that may include intelligence analysis, operational planning, and support for cyber operations. Reporting from global wire services has highlighted how defense and intelligence agencies worldwide are racing to secure access to such systems, often through opaque agreements that reveal little about technical safeguards or legal boundaries. That trend raises classic questions about democratic control over national security tools that are increasingly complex, proprietary, and difficult for non-specialists to scrutinize.

OpenAI has tried to distinguish its approach by stressing what it calls “layered protections” around the Pentagon arrangement, a phrase used repeatedly in detailed coverage of the pact. Those layers include access controls, logging, human review, and explicit carve-outs that the company says will keep its models from being used to develop biological weapons, conduct mass surveillance, or control fully autonomous lethal systems. Yet without statutory requirements for disclosure or independent technical evaluation, the durability of those assurances will depend on internal corporate decision-making and on how forcefully government customers insist on keeping the tools within agreed boundaries. As the Department of War proceeds with both its AI procurement push and its updated doctrine on autonomy, the OpenAI agreement may become an early benchmark for whether voluntary safeguards can keep pace with the strategic incentives driving militaries to adopt ever more capable machine intelligence.


*This article was researched with the help of AI, with human editors creating the final content.*