Somewhere inside the Environmental Protection Agency, an AI tool is scanning for environmental violations. At the Department of Energy, a platform called Genesis is crunching forecasts that shape national energy policy. And at the Federal Housing Finance Agency, regulators are quietly layering machine learning into mortgage oversight. These are three entries in a federal inventory that, by mid-2026, has ballooned to 3,611 active AI use cases spread across 56 agencies, up from roughly 2,136 the year before.
That 69% surge, documented through agency-published inventories now required under federal policy, marks the fastest single-year expansion of AI adoption in U.S. government history. But the growth has outpaced the infrastructure meant to keep it in check, raising pointed questions about oversight, risk, and whether the federal government truly understands what it has built.
Where the numbers come from
The government-wide total of 3,611 use cases is not drawn from a single centralized database. Instead, it is an aggregation of individual inventories that each of the 56 agencies publishes separately, in compliance with Office of Management and Budget directives and the Advancing American AI Act (originally S.1353 in the 117th Congress, signed into law as part of the 2023 National Defense Authorization Act). A January 2025 executive order, “Removing Barriers to American Leadership in Artificial Intelligence,” accelerated the push by directing agencies to catalog and expand their AI capabilities. That order replaced the Biden administration’s October 2023 executive order on AI safety, shifting the policy emphasis from risk mitigation and guardrails toward rapid deployment and reduced regulatory friction. Whether that pivot has accelerated adoption beyond what the earlier framework would have produced, or simply relabeled activity already underway, remains a subject of active debate among federal technology officials and outside analysts.
The General Services Administration offers one of the most granular windows into this effort. Its 2025 AI use case page includes downloadable CSV files that describe each tool’s purpose, deployment stage, topic area, and claimed operational impact. The Department of Energy publishes its own OMB-compliant inventory. The Department of Commerce breaks its FY25 inventory down by bureau. The EPA released a 2025 consolidated report through its AI Use Case Inventory program, and the Federal Housing Finance Agency provides year-by-year downloadable files with explicit release dates, making it possible to track growth at a single regulator over time.
These are primary sources, published by the agencies themselves, and they form the most reliable foundation for any count of federal AI deployment. Researchers, journalists, and oversight bodies arriving at the 3,611 figure do so by downloading each agency’s inventory and summing the entries, a method that is straightforward but dependent on each agency’s own definitions and reporting consistency.
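The summing method described above can be sketched in a few lines of Python. The file contents and column headers below are hypothetical, since each agency publishes its own schema; the sketch only counts rows, one per listed use case, rather than parsing any particular field:

```python
import csv
import io

def count_use_cases(csv_files):
    """Sum the number of data rows (one per AI use case) across
    per-agency inventory files. Column layouts vary by agency,
    so we count rows instead of relying on specific fields."""
    total = 0
    for f in csv_files:
        reader = csv.DictReader(f)  # skips the header row
        total += sum(1 for _ in reader)
    return total

# Demo with two tiny in-memory inventories (hypothetical column
# names and entries -- real agency files differ in schema).
gsa = io.StringIO(
    "Use Case Name,Stage\n"
    "Fraud triage,Deployed\n"
    "Chat assistant,Pre-deployment\n"
)
doe = io.StringIO(
    "Use Case Name,Stage\n"
    "Genesis forecasting,Deployed\n"
)
print(count_use_cases([gsa, doe]))  # → 3
```

The simplicity is the point: anyone with the downloaded spreadsheets can reproduce the headline total, but the count inherits whatever definitional inconsistencies each agency baked into its own file.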
What the auditors found
The Government Accountability Office has emerged as the leading independent check on this expansion. Its report GAO-25-107653, “Artificial Intelligence: Generative AI Use and Management at Federal Agencies,” examined inventories from a subset of agencies and documented year-over-year shifts in adoption. Notably, the GAO zeroed in on generative AI, the category of tools that includes large language models and content-generation systems. That focus reflects a recognition that generative AI carries distinct risks: hallucinated outputs, data privacy concerns, and the potential to automate decisions that previously required human judgment.
A Brookings Institution assessment of federal inventories from 2023 through 2025 added further texture. It found uneven adoption across agencies, with larger departments logging far more use cases and certain functional areas, particularly law enforcement and fraud detection, drawing disproportionate investment. That pattern suggests the headline number masks wide variation in how mature, consequential, or well-governed these tools actually are.
The gaps that matter
For all the transparency these inventories provide, they leave significant holes. No single federal source categorizes all 3,611 use cases by risk level. The GAO audited only a portion of agencies, so its findings about oversight shortfalls apply to a slice of the total, not the whole. Whether unaudited agencies face similar or worse governance problems remains an open question.
Definitions are another problem. The GSA distinguishes between “pre-deployment” and “deployed” stages, but agencies do not apply those labels uniformly. Some entries cataloged as AI use cases may be little more than rule-based automation scripts. Others involve sophisticated machine learning models that directly affect whether someone receives a government benefit, faces an enforcement action, or gets flagged by a federal screening system. The inventories do not consistently make that distinction clear.
Perhaps most critically, the public data includes almost no independent verification of results. Agencies describe what their tools do and where they operate, but evidence of cost savings, accuracy improvements, or error rates is largely absent. Without that information, it is impossible to judge whether a given AI deployment is delivering real value or simply occupying a line in a spreadsheet.
The published inventories and audit reports also contain no on-the-record statements from agency chief AI officers or other named officials describing deployment challenges, resource constraints, or lessons learned. That absence of direct, attributable commentary from the people managing these systems leaves a significant gap in public understanding of how the expansion is actually unfolding inside agencies.
Why the oversight race is still being lost
A 69% jump in cataloged AI tools is significant by any measure. But cataloging is not the same as governing. The federal government’s ability to manage this expansion responsibly depends on whether oversight keeps pace with inventory growth, and right now, the evidence suggests it has not.
The GAO has started closing that gap. Congress, agency inspectors general, and the public will need to follow. For anyone who wants to see how AI is entering the government services that touch their daily life, the most direct step is also the simplest: visit the agency inventory pages, download the spreadsheets, and look at what is actually listed. The data is there. The accountability infrastructure around it is still catching up.
*This article was researched with the help of AI, with human editors creating the final content.