Sam Altman, the CEO of OpenAI, faced pointed questions from U.S. lawmakers during a closed-door meeting in Washington as scrutiny intensifies over the artificial intelligence company’s growing role in military technology. The session centered on a $200 million prototype contract awarded to OpenAI’s government-facing subsidiary by the Department of Defense, a deal that has drawn criticism from employees, ethicists, and members of Congress alike. The confrontation signals that the political debate over AI in warfare has shifted from theoretical to deeply personal for the company that once pledged to build technology “for the benefit of all of humanity.”
The $200 Million Pentagon Deal
The contract at the center of the controversy was awarded to OpenAI Public Sector LLC under a prototype Other Transaction agreement, according to the Defense Department notice. Designated contract number HQ0883-25-9-0012, the award carries a ceiling value of $200 million and tasks OpenAI with developing “prototype frontier AI capabilities” for “critical national security challenges” in both “warfighting and enterprise domains.”
That language has become a flashpoint. Reporting by the Guardian emphasized the “warfighting” aspect of the deal, while the Pentagon’s own posting uses a broader framing that includes enterprise applications alongside combat-related work. The distinction matters: critics argue the focus on warfighting reveals the true intent, while defenders point to the enterprise component as evidence that the tools will also serve logistics, intelligence analysis, and administrative functions. Neither OpenAI nor the Pentagon has released a detailed breakdown of how the $200 million will be allocated across those categories, leaving both sides to argue over a contract that remains largely opaque to the public.
Altman’s Defense and Internal Backlash
The Washington meeting was not the first time Altman has had to justify the company’s defense pivot. Earlier this year, he defended OpenAI’s decision to allow the Pentagon to use its tools for classified work during a staff address, calling the internal backlash “really painful” for him. That description underscored that the resistance inside OpenAI is not a fringe complaint but a significant internal rift that the CEO felt compelled to confront directly.
Two days after that staff address, Altman struck a more contrite tone. In a memo to employees, he said he regretted moving so quickly to secure the deal, acknowledging that the push into defense work looked “opportunistic” inside the company. That kind of admission is unusual for a tech executive in the middle of a major government contract. It also reflects a delicate balancing act: Altman is trying to reassure employees who worry about building tools for warfare, while signaling to the Pentagon and Congress that OpenAI remains a reliable partner for sensitive national security work.
Altman’s preferred framing, laid out in the same period, is that elected officials rather than OpenAI should decide how the military uses AI. In interviews and discussions, he has argued that democratic institutions should set the red lines for battlefield autonomy, rules of engagement, and acceptable targets. The message is that OpenAI will build powerful systems but defer to government on how far those systems should go in combat. For employees who joined the company to work on broadly beneficial applications of AI, that division of responsibility has not fully resolved the underlying question of whether OpenAI should be in the business of war at all.
From Counter-Drones to Classified Work
The Pentagon contract did not emerge in a vacuum. OpenAI had already signaled its willingness to work with the defense sector through a partnership with Anduril Industries, the defense technology firm founded by Palmer Luckey. That collaboration, announced in late 2024, focused on counter-drone capabilities for military applications, using OpenAI’s models to help operators detect and respond to aerial threats. Both companies declined to share financial details of the arrangement at the time.
The Anduril partnership was significant because it represented OpenAI’s first overt step into battlefield technology. Counter-drone systems have become a priority in defense procurement, driven by the proliferation of cheap unmanned aerial vehicles in conflicts from Ukraine to the Middle East. Yet the jump from a narrowly scoped collaboration with a single contractor to a $200 million prototype agreement covering “warfighting” applications marks a qualitative escalation. OpenAI moved from a supporting role in one niche capability to a direct, large-scale relationship with the Department of Defense, with a mandate to explore how its frontier models might be integrated into a broad range of military operations.
Within the company, that trajectory has fueled fears of mission creep. Employees who tolerated limited work on defensive systems, such as drone detection or base protection, now see OpenAI’s tools being pulled deeper into the planning and execution of combat operations. The lack of clear public guardrails around those uses has only intensified the anxiety.
What Lawmakers Want to Know
The questions Altman faced in Washington reflect a broader concern that existing oversight mechanisms are not equipped to handle the speed at which AI companies are entering the defense space. Traditional contractors operate under decades of procurement rules, security protocols, and congressional review processes. OpenAI, by contrast, received its award through a prototype Other Transaction agreement, a contracting vehicle designed to move faster than standard procurement and to attract nontraditional defense suppliers.
That speed is the point, but it is also the problem. OT agreements carry fewer reporting requirements and less congressional visibility than conventional contracts. That lawmakers pressed Altman in a closed-door session, rather than a public hearing, suggests the conversation may have touched on classified or commercially sensitive details that cannot be discussed openly. Yet the absence of a public record means voters and taxpayers have limited insight into what commitments were made, what safeguards were promised, or how success will be measured.
The political dynamics cut across party lines. Some members of Congress see AI-enabled defense tools as essential to maintaining a military edge over China and Russia, arguing that adversaries will not wait for the United States to resolve its ethical debates. Others worry that rushing advanced AI into combat applications without clear guardrails could produce catastrophic errors, from friendly-fire incidents to miscalculated escalations in tense theaters. Altman’s stated position that elected officials should set the rules aligns rhetorically with the second camp but conflicts with the pace at which OpenAI has pursued defense revenue in practice.
The Gap Between Words and Contracts
The most striking tension in this story is the gap between Altman’s public rhetoric and OpenAI’s contractual commitments. Calling the deal “opportunistic” in an internal memo while simultaneously defending it to lawmakers creates a credibility problem on both fronts. Employees hear a leader who appears to recognize their discomfort yet continues to expand the company’s military footprint. Lawmakers see an executive who urges them to take responsibility for hard choices even as his company races ahead under a fast-track contracting mechanism.
That dissonance is amplified by OpenAI’s origins as a nonprofit research lab that vowed to prioritize safety and broad benefit over commercial gain. The creation of OpenAI Public Sector LLC, the embrace of classified work, and the pursuit of large Pentagon contracts all mark a sharp departure from that founding narrative. For critics, the $200 million prototype agreement is not merely another revenue stream but evidence that OpenAI has become a conventional defense contractor in all but name.
Supporters of the deal counter that refusing to engage with the Pentagon would not stop militaries from adopting AI; it would simply cede the field to less cautious actors. From that perspective, OpenAI’s involvement is a way to embed safety practices, testing regimes, and human oversight into systems that might otherwise be built with fewer constraints. The unresolved question is whether those safeguards will be strong enough, and transparent enough, to satisfy both the engineers building the models and the public whose security is at stake.
As Congress weighs how to regulate AI in national security, Altman’s closed-door appearance is unlikely to be the last. Lawmakers are already exploring new reporting requirements for Other Transaction agreements, tighter rules around autonomous weapons, and clearer lines of accountability for companies that build dual-use AI systems. For OpenAI, the outcome of that debate will determine whether its Pentagon work is seen as a necessary evolution in a dangerous world or as a betrayal of the ideals that once set it apart from the rest of Silicon Valley.
*This article was researched with the help of AI, with human editors creating the final content.