When Anthropic CEO Dario Amodei told the Pentagon his company “cannot in good conscience accede” to demands for unrestricted military use of its AI models, the Defense Department did not wait long to find a replacement. The Pentagon’s Chief Digital and Artificial Intelligence Office has selected Google Cloud’s Gemini for Government to power GenAI.mil, a new platform that will put generative AI tools in the hands of defense personnel across secure networks, according to an official announcement from Google Cloud.
The selection marks a turning point for military AI procurement. It follows a bitter and very public rupture between the Pentagon and Anthropic, one of the most prominent AI safety companies in the world, and it arrives alongside a separate $200 million contract ceiling awarded to OpenAI’s public sector arm. Taken together, the moves show a Defense Department racing to lock in commercial AI partners willing to operate on its terms, with real consequences for companies that refuse.
The Anthropic breakdown
The split between the Pentagon and Anthropic did not happen quietly. Defense Secretary Pete Hegseth pressed Anthropic to let the military use the company’s AI technology as it sees fit, according to sources cited by the Associated Press. The Pentagon demanded what AP described as “unrestricted” use of Anthropic’s models under an existing contract; the contested provisions reportedly included language that could enable mass surveillance of Americans and the development of fully autonomous weapons.
Amodei’s refusal was unambiguous. In a public statement, the Anthropic CEO said the company could not agree to the Pentagon’s terms, drawing a line that few major AI vendors have been willing to draw so explicitly.
The Pentagon responded by moving to designate Anthropic a supply chain risk, a label that would have effectively frozen the company out of defense procurement. A federal judge intervened with a preliminary injunction that temporarily blocks the designation, according to AP’s reporting on the legal proceedings. Third-party groups have filed amicus briefs containing factual declarations that bear on both the procurement rules and the AI safety claims at issue. But the injunction is preliminary, not final, and Anthropic now occupies an uncomfortable middle ground: not formally excluded from defense work, but not clearly welcome either.
Google steps in, with history in tow
Google’s selection for GenAI.mil carries a layer of irony that industry observers will not miss. In 2018, thousands of Google employees signed a letter protesting Project Maven, a Pentagon program that used Google AI to analyze drone surveillance footage. The backlash led Google to withdraw from the contract and publish a set of AI principles that explicitly excluded weapons applications. The company said at the time it would not design AI for use in systems intended to cause harm.
Seven years later, the landscape looks different. Google Cloud has built a substantial government and defense business, achieving the security accreditations necessary to handle sensitive national security data. The GenAI.mil deployment will meet Impact Level 5 requirements, an accreditation tier defined in the Defense Department’s cloud security requirements that governs data sovereignty and keeps information within approved government environments. IL5 means Google’s models will handle controlled unclassified information tied to national defense missions, not just routine administrative tasks.
What the announcement does not reveal is whether Google accepted the same kind of expansive usage terms that Anthropic rejected. Google’s press release includes compliance and sovereignty language but does not specify the operational boundaries governing how the military can apply Gemini’s capabilities. If Google agreed to broader terms than Anthropic was willing to tolerate, that represents a significant shift in how major AI companies negotiate ethical guardrails with the military. If Google secured narrower terms, the Anthropic dispute may have been less about industry norms and more about one company’s specific red lines. Neither scenario can be confirmed from available sources as of May 2026.
The OpenAI deal adds to the pattern
The Google selection did not happen in isolation. The Department of Defense also awarded OpenAI Public Sector LLC a $200 million Other Transaction Agreement, contract number HQ0883-25-9-0012, according to an official DoD contract notice. OTAs sit outside the standard federal acquisition regulations, letting the Pentagon sidestep traditional competitive bidding and move faster when acquiring emerging technology. The $200 million figure represents a contract ceiling, not funds already spent, but it signals the scale of investment the military is prepared to commit.
Together, the Google and OpenAI deals establish a clear pattern: the Pentagon is using flexible procurement vehicles to bring frontier AI companies into the defense ecosystem at speed, and it is willing to put serious money behind that effort. For smaller AI firms, the barrier to entry is rising. Achieving IL5 compliance and the associated security accreditations demands substantial investment in infrastructure, legal support, and government relations. Companies without those resources may find themselves relegated to subcontractor roles, providing models or tools that run on platforms controlled by a handful of large vendors.
What remains unanswered
Several important questions remain open. No official Pentagon statement has explained why Google was chosen over other potential providers for GenAI.mil, or whether the Anthropic fallout directly accelerated the decision. The Google Cloud announcement provides technical specifications but includes no attributed quotes from DoD officials explaining the selection rationale.
The long-term effect of the court injunction on Anthropic’s eligibility for future defense contracts has not been established in public filings. Whether the injunction survives further legal proceedings, and whether it restores Anthropic’s standing in any meaningful way, are questions that could take months to resolve.
Perhaps most critically, it remains unclear how IL5 deployment will intersect with existing Pentagon policies on autonomous weapons and surveillance. IL5 status describes the sensitivity of the data environment, not the operational rules governing how models can be used. Without public documentation of approved use cases, auditing mechanisms, or safety constraints, there is no way to know whether GenAI.mil will be confined to decision support and administrative tasks or extended into areas that raise sharper ethical and legal concerns.
A new power dynamic between the Pentagon and Silicon Valley
For defense contractors, AI startups, and technology firms weighing government work, the practical message is hard to miss. The Pentagon is signaling that access to some of the largest AI contracts in existence may depend on a willingness to accept broad military use of commercial models. Companies that set firm limits on applications tied to surveillance or autonomous weapons face real procurement consequences, up to and including attempts to label them supply chain risks. Companies that accept expansive terms may gain market share but risk backlash from employees, civil society groups, and international customers wary of militarized AI.
The balance of leverage will likely shift over time. If frontier model providers conclude that defense contracts are strategically important but not existential to their business, they may continue to draw red lines around certain use cases, absorbing the lost revenue. If the largest firms come to see military work as central to long-term growth and influence, they may accept terms that shift ethical decision-making power away from corporate boards and toward national security agencies.
For now, the confirmed facts as of May 2026 point to a defense establishment moving quickly to embed generative AI into core operations, a leading AI safety firm that chose to walk away rather than relax its constraints, and a major cloud provider stepping into the gap. How that triangle evolves will shape not only the future of GenAI.mil but also the broader norms governing how advanced AI gets used in conflict and national security.