During a House hearing on federal AI adoption, members of Congress heard testimony that should unsettle anyone responsible for defending government networks: artificial intelligence is learning to find software flaws faster than humans can fix them. The hearing, titled “How are Federal Agencies Harnessing Artificial Intelligence?” and held during the 118th Congress (2023-2024 session), produced witness statements warning that AI-powered tools could surface zero-day vulnerabilities at a pace that would swamp the security teams tasked with patching them.
That warning is no longer purely theoretical. In early 2025, Google publicly disclosed disrupting attackers who used AI to exploit a previously unknown software weakness, and peer-reviewed research from the National Institute of Standards and Technology has documented how machine learning is reshaping the zero-day landscape on both offense and defense. Taken together, these developments point toward a future in which the traditional patch cycle, often measured in weeks or months, may be dangerously out of step with the speed of AI-assisted exploitation.
What the congressional record shows
The official hearing record for Event 116358 includes the full transcript and witness testimony. Federal officials and outside experts told lawmakers that AI tools could accelerate the identification of previously unknown software weaknesses, creating a volume of zero-day disclosures that current security operations centers are neither staffed nor funded to absorb. Witnesses described a scenario in which the same large language models and automated reasoning systems used to write code could also be turned loose on codebases to find exploitable bugs, potentially at industrial scale.
The hearing did not produce a single quantitative forecast for how many new zero-days AI might generate per year. Testimony remained qualitative, framing the risk in terms of direction and urgency rather than precise numbers. That absence matters: without modeled projections, budget planners and agency CISOs are left to act on instinct rather than data.
Technical research backs the concern
A literature review published by NIST’s Information Technology Laboratory in January 2023, titled “A Review of Machine Learning-based Zero-day Attack Detection: Challenges and Future Directions,” surveyed how ML models are changing the way both attackers and defenders interact with unknown vulnerabilities. The paper found that even AI-enhanced detection systems struggle with real-time adaptation, meaning defensive tools can lag behind offensive ones built on the same underlying technology.
That asymmetry is the core of the problem lawmakers flagged. If offense scales faster than defense when both sides adopt machine learning, the advantage tilts toward attackers, at least in the near term. The NIST review focused on detection (identifying zero-day attacks after they happen) rather than discovery (finding new flaws before anyone else does). No companion study has yet measured AI’s effect on vulnerability discovery rates with comparable rigor, leaving a significant gap in the public research base.
NIST’s broader NICE Workforce Framework for Cybersecurity adds another dimension. The cybersecurity talent pipeline is not growing fast enough to meet existing demand, let alone the surge in workload that AI-generated vulnerability disclosures could create. Agencies already competing for scarce analysts would face even steeper hiring challenges.
Google’s disclosed disruption of AI-assisted exploitation
In early 2025, Google disclosed that it disrupted hackers who used AI to exploit an unknown flaw in a company’s digital defenses. The Associated Press reported the incident with on-the-record attribution to leadership at Google’s Threat Intelligence Group, who characterized the case as a significant marker in the evolution of AI-assisted cyberattacks.
Key details remain limited. Google has not fully disclosed the specific vulnerability, the identity of the targeted organization, or the precise role AI played in the exploit chain. It is unclear whether the AI component compressed the timeline from discovery to working exploit or simply automated steps a skilled human attacker would have performed manually. That distinction matters for gauging how much AI truly changes the economics of zero-day development versus how much it streamlines existing techniques.
A single disclosed incident does not establish a trend. But it does confirm that AI-assisted zero-day exploitation has moved out of controlled research environments and into live threat campaigns. Given that companies and governments routinely delay or suppress breach disclosures, the Google case may represent the visible tip of a larger pattern.
Gaps in the federal response
As of mid-2025, the Cybersecurity and Infrastructure Security Agency (CISA) had not issued formal public guidance specifically addressing AI-driven zero-day proliferation. CISA maintains its Known Exploited Vulnerabilities (KEV) catalog, which compels federal agencies to patch actively exploited flaws within set deadlines. But the KEV process is reactive: it responds to vulnerabilities after exploitation is confirmed. If AI tools begin surfacing exploitable flaws faster than the catalog can be updated, the system’s latency becomes a liability.
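To make that latency concrete, here is a minimal Python sketch that pulls CISA's public KEV JSON feed and flags entries whose remediation due dates have already passed. The feed URL and field names (vulnerabilities, cveID, dueDate) reflect the catalog's published schema at the time of writing; treat both as assumptions to verify against cisa.gov rather than a stable interface.

```python
import json
import urllib.request
from datetime import date, datetime

# CISA's public KEV feed. URL and field names reflect the published
# schema at the time of writing; verify both against cisa.gov.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def overdue_kev_entries(today=None):
    """Return KEV entries whose remediation due date has passed."""
    today = today or date.today()
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return [
        vuln for vuln in catalog.get("vulnerabilities", [])
        if datetime.strptime(vuln["dueDate"], "%Y-%m-%d").date() < today
    ]

if __name__ == "__main__":
    for vuln in overdue_kev_entries()[:10]:
        print(vuln["cveID"], "was due", vuln["dueDate"])
```

Even a report like this surfaces flaws only after exploitation is confirmed, which is precisely the lag the hearing testimony warned about.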
Nor, as of mid-2025, had formal guidance from the Office of Management and Budget or CISA set expectations for how agencies should adjust patch timelines, adopt defensive AI tools, or report on their preparedness for accelerated exploit cycles. Likewise, no publicly available federal projection had quantified how many additional zero-day vulnerabilities AI tools might surface per year compared with traditional manual research. The gap between the congressional warning and any institutional policy response remains an open variable.
Meanwhile, DARPA’s AI Cyber Challenge (AIxCC), a competition designed to spur development of AI systems that can autonomously find and fix software vulnerabilities, has demonstrated that the technology to automate both sides of the equation is maturing rapidly. Results from AIxCC underscore that the offensive potential lawmakers worry about has a defensive counterpart, but only if agencies invest in deploying it.
What federal security teams should pressure-test now
For security practitioners inside federal agencies, congressional testimony, NIST research, and the Google incident all point in the same direction, even if they do not yet agree on a precise timeline or scale. Three pressure points deserve immediate attention.
Volume. If AI tools increase the raw number of exploitable flaws discovered and weaponized, agencies will need to triage more aggressively. That means focusing scarce human attention on systems supporting critical missions or holding sensitive data. AI itself could assist in this prioritization, ranking vulnerabilities by exploitability, potential impact, and network exposure, but only if agencies have access to trustworthy models and the infrastructure to run them securely.
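What that prioritization could look like mechanically is sketched below: findings ranked by a weighted score over exploitability, mission impact, and network exposure. The weights, the 0-to-1 inputs, and the placeholder CVE identifiers are illustrative assumptions, not a sanctioned federal scoring model.

```python
from dataclasses import dataclass

# Illustrative weights -- an agency would calibrate these, not reuse them.
WEIGHTS = {"exploitability": 0.5, "impact": 0.3, "exposure": 0.2}

@dataclass
class Finding:
    cve_id: str
    exploitability: float  # 0.0-1.0, e.g. derived from an EPSS-style score
    impact: float          # 0.0-1.0, mission/data sensitivity of the host
    exposure: float        # 0.0-1.0, internet-facing vs. internal

    def score(self) -> float:
        return (WEIGHTS["exploitability"] * self.exploitability
                + WEIGHTS["impact"] * self.impact
                + WEIGHTS["exposure"] * self.exposure)

def triage(findings: list[Finding]) -> list[Finding]:
    """Highest-risk findings first, so scarce analysts start at the top."""
    return sorted(findings, key=lambda f: f.score(), reverse=True)

# Placeholder CVE identifiers, purely for illustration.
queue = triage([
    Finding("CVE-0000-0001", exploitability=0.9, impact=0.4, exposure=1.0),
    Finding("CVE-0000-0002", exploitability=0.3, impact=0.9, exposure=0.2),
])
for f in queue:
    print(f.cve_id, round(f.score(), 2))
```

The point of the design is not the specific weights but the discipline: every new disclosure enters a ranked queue rather than an undifferentiated backlog.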
Speed. Many agencies still batch patches into scheduled maintenance windows to minimize operational disruption. In a world of machine-speed exploitation, those windows may be fatally slow. Security leaders will need to revisit risk trade-offs, potentially accepting more frequent but smaller disruptions in exchange for faster mitigation of critical flaws. An honest internal audit of mean time to patch for critical vulnerabilities is the necessary starting point.
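That audit can start small. The sketch below computes mean time to patch from disclosure and remediation dates; the record layout is a stand-in assumption, and in practice the inputs would come from an agency's vulnerability-management or ticketing export.

```python
from datetime import datetime
from statistics import mean

# Stand-in records; in practice these come from a VM/ticketing export.
records = [
    {"cve": "CVE-0000-0003", "disclosed": "2025-01-02", "patched": "2025-01-30"},
    {"cve": "CVE-0000-0004", "disclosed": "2025-02-10", "patched": "2025-02-14"},
]

def days_to_patch(rec):
    """Elapsed days between public disclosure and applied patch."""
    fmt = "%Y-%m-%d"
    delta = (datetime.strptime(rec["patched"], fmt)
             - datetime.strptime(rec["disclosed"], fmt))
    return delta.days

mttp = mean(days_to_patch(r) for r in records)
print(f"Mean time to patch: {mttp:.1f} days across {len(records)} criticals")
```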
Complexity. AI may help attackers chain multiple subtle weaknesses together, producing exploits that are harder to detect with signature-based tools. That pushes agencies toward behavioral analytics and anomaly detection, systems that flag suspicious activity even when the specific exploit is unknown. This shift aligns with the direction of the NIST research but requires sustained investment in both technology and the specialized staff to operate it.
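To illustrate the behavioral approach in miniature, the sketch below flags hours in which a host's event volume sits far above its own baseline using a simple z-score. Production systems use far richer features and models; treat this as a toy baseline for the concept, not a detection product.

```python
from statistics import mean, stdev

def zscore_anomalies(counts, threshold=2.5):
    """Flag indices where a count sits `threshold` or more standard
    deviations above the series mean -- no exploit signature required."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if (c - mu) / sigma >= threshold]

# Toy data: outbound connections per hour for one host. The spike at
# index 8 is flagged even though no known-bad signature matches it.
hourly_connections = [12, 9, 14, 11, 10, 13, 12, 11, 220, 10, 12]
print(zscore_anomalies(hourly_connections))  # -> [8]
```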
Signals that will shape federal patch-cycle strategy
Several near-term developments will clarify whether the threat flagged on Capitol Hill hardens into a sustained crisis. Watch for additional public disclosures of AI-assisted zero-day exploitation, whether through law enforcement actions, vendor transparency reports, or incident response firms. Watch for federal bodies publishing quantitative assessments of AI’s impact on vulnerability discovery, moving the conversation from qualitative alarm to modeled scenarios with numbers attached.
Policy signals matter too. If CISA or OMB issues guidance that explicitly addresses AI-driven zero-day proliferation, it will mark a shift from awareness to institutional action. Such documents could mandate tighter patch deadlines, require agencies to deploy specific defensive AI capabilities, or establish reporting requirements for preparedness against accelerated exploit cycles.
The evidence base is still developing, and the exact scale and timing of the threat remain uncertain. But the trajectory is clear enough to act on. Federal cybersecurity teams that begin tightening patch workflows, experimenting with defensive AI, and investing in specialized skills between now and mid-2026 will be better positioned than those that wait for the next congressional hearing to confirm what this one already warned about.
*This article was researched with the help of AI, with human editors creating the final content.