Morning Overview

Pentagon adds more Google Gemini models to GenAI.mil secure platform

The Department of Defense has expanded the roster of Google Gemini models available on GenAI.mil, the Pentagon’s secure generative AI platform that now serves more than one million users across the military, civilian workforce, and contractor community, according to a Bloomberg report published in May 2026. The update deepens Google Cloud’s role as the foundational provider on a platform the Pentagon has positioned as a default productivity tool for anyone with a Common Access Card on its nonclassified network.

The addition comes as the department races to turn GenAI.mil into a multi-vendor AI environment, with OpenAI’s ChatGPT and xAI’s Grok family of models also slated for integration. The speed of that buildout, from first login to a million users in roughly two months, has few precedents inside the federal government and is raising questions about whether security oversight can keep pace.

How GenAI.mil got here

GenAI.mil launched with Google Cloud’s Gemini for Government as its first frontier model, built to operate at Impact Level 5, a Defense Department cloud security designation that permits handling of sensitive but unclassified department data. The platform requires no extra software. Personnel authenticate with a Common Access Card on the department’s nonclassified network and access GenAI.mil from a standard desktop browser.

Defense Secretary Pete Hegseth announced the rollout directly to staff, describing GenAI.mil as live and available for military personnel, civilians, and contractors, according to a department message. (That message links to war.gov, the Defense Department’s rebranded web domain, which replaced defense.gov.)

Adoption moved fast. A subsequent DoD release confirmed the platform crossed one million unique users within two months and maintained 100 percent uptime from launch. That same announcement disclosed a partnership with OpenAI to bring ChatGPT to all department personnel through the same portal. Separately, the department revealed an agreement with xAI to integrate its Grok models, with initial deployment originally targeted for early 2026.

The Chief Digital and Artificial Intelligence Office, or CDAO, oversees the platform and has managed each vendor addition through formal accreditation at the IL5 security tier before granting access.

What the Gemini expansion means

Bloomberg reported that additional Gemini models have been loaded onto GenAI.mil, though the Pentagon has not issued a corresponding press release specifying which variants were added or when the upgrade took effect. That detail matters because Gemini models differ significantly in capability. A version with a longer context window, for example, could let an intelligence analyst feed an entire multi-hundred-page assessment into a single prompt rather than breaking it into fragments, a practical change that would reshape how large document sets are processed for logistics planning or threat analysis.

One likely reason Google’s expansion came before the other vendors reached full deployment: the company already held IL5 authorization for GenAI.mil’s infrastructure. Adding new models from an already-accredited provider is a faster process than onboarding an entirely new vendor, which requires separate security reviews, red-teaming, and data-handling agreements. Neither the Pentagon nor Google Cloud has published performance benchmarks or security audit results for the expanded model set.

Where the other vendors stand

The OpenAI partnership was announced alongside the one-million-user milestone, but the department did not specify an exact activation date for ChatGPT access within GenAI.mil. As of May 2026, no public update has confirmed whether ChatGPT is live for all users or still working through accreditation.

The xAI timeline presents a similar gap. The original agreement targeted early 2026 for Grok’s initial deployment, a window that has now arrived or passed. No official follow-up has confirmed whether Grok is operational on the platform, in a limited pilot phase, or delayed by technical or policy hurdles. The silence is notable given the department’s willingness to publicize earlier milestones.

For personnel planning workflows around a specific model, the safest step is to check availability directly on the platform after logging in. Official announcements have not always aligned with quiet configuration changes made inside the firewall.

The security question no one has answered publicly

The Pentagon’s 100 percent uptime figure is a reliability metric, not a security or accuracy measure, and it comes from the department itself. No third-party audit or inspector general review has validated the number in any public document. The “unique users” count also lacks a published definition: it is unclear whether the figure tracks unique Common Access Card holders, unique logins, or some other measure, a distinction that matters when judging whether adoption is broad or concentrated among a smaller group of frequent users.

Each new model added to GenAI.mil multiplies the attack surface that must be monitored. Different large language models carry different hallucination rates, different training-data lineages, and different vulnerabilities to prompt injection. The department has not disclosed whether it imposes uniform red-teaming requirements across all vendors or whether each provider negotiates its own guardrails. Nor has it explained how it plans to isolate or retire a model quickly if a security flaw surfaces after deployment.

The broader pattern is a Pentagon moving faster on generative AI adoption than nearly any other federal agency, betting that a multi-model environment will give commanders and analysts the flexibility to match tools to missions. That bet carries real upside: competition among vendors can drive better performance and lower costs. But it also means the CDAO must govern an expanding ecosystem of models from rival companies, each updating on its own release cycle, inside a security perimeter that was originally built around a single provider. Whether the department’s oversight, testing, and transparency frameworks can scale as quickly as the technology they are supposed to govern is the question that will define GenAI.mil’s next phase.

*This article was researched with the help of AI, with human editors creating the final content.*