Morning Overview

Rapid AI gains intensify the push to secure critical systems

In early 2024, a water treatment facility in rural Texas lost control of its systems after a cyberattack linked to a Russian hacktivist group. The breach was rudimentary, not AI-driven, but it exposed a vulnerability that security officials say artificial intelligence could soon exploit at scale: thousands of utilities, power plants, and transit networks running on aging digital controls with little defense against rapidly evolving threats.

That scenario is now driving the most concentrated federal effort yet to define how AI should be built, deployed, and monitored inside the infrastructure Americans depend on every day. Since late 2023, at least five federal agencies and one major international partner have issued guidelines, frameworks, and risk assessments targeting AI in critical systems. But as of May 2026, enforcement remains largely voluntary, sector coverage is uneven, and the policy landscape has shifted dramatically under a new administration.

A burst of federal action, then a policy reversal

The push began with Executive Order 14110, signed by President Biden on October 30, 2023, which directed federal departments to assess AI risks across critical infrastructure and produce actionable guidance. Within weeks, the Cybersecurity and Infrastructure Security Agency and the United Kingdom’s National Cyber Security Centre released joint guidelines for secure AI system development, the first formal transatlantic framework focused specifically on making AI products resistant to attack from the design stage.

The National Security Agency co-authored that guidance, a detail that underscored how military and signals-intelligence agencies view poorly secured AI components as a direct operational risk. Compromised machine-learning models or poisoned data pipelines reaching infrastructure networks could be weaponized to disrupt services or siphon sensitive operational data.

On November 14, 2024, the Department of Homeland Security followed with a sector-focused framework for safe AI deployment in critical infrastructure, developed with its AI Safety and Security Board. Covering energy, transportation, and communications, the framework translated the executive order’s broad mandates into concrete expectations around governance, testing, monitoring, and incident response.

Then the ground shifted. On January 20, 2025, President Trump signed Executive Order 14148, revoking Biden’s AI order and signaling a preference for lighter regulatory oversight of the technology. The revocation did not withdraw the CISA-NCSC guidelines or the DHS framework, which remain publicly available, but it removed the top-level directive that had compelled agencies to produce them. Federal AI-security work has continued in pockets, but without the executive mandate that originally unified it.

What the frameworks actually say

The DHS framework remains the most comprehensive single document. It lays out expectations for operators and vendors across multiple sectors: establish AI governance structures, conduct adversarial testing before deployment, maintain continuous monitoring of AI systems in operation, and build incident-response plans that account for AI-specific failure modes such as model drift and data poisoning.
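To make the monitoring expectation above concrete, here is a minimal sketch of how an operator might flag model drift, one of the AI-specific failure modes the framework says incident-response plans should cover. The statistical test and alert threshold are illustrative assumptions, not anything prescribed by the DHS document:

```python
from statistics import mean, stdev

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Standardized shift of the live input mean against the baseline window."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

def check_drift(baseline: list[float], live: list[float],
                threshold: float = 3.0) -> dict:
    """Return a drift score and an alert flag for an incident-response hook.

    The threshold of 3 standard deviations is a hypothetical starting
    point; a real deployment would tune it per sensor and per model.
    """
    score = drift_score(baseline, live)
    return {"score": score, "alert": score > threshold}
```

In practice a monitoring pipeline would run a check like this continuously over incoming sensor or feature distributions and route alerts into the same incident-response process used for conventional security events.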

The Department of Energy’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER) produced a separate risk assessment focused on AI in energy infrastructure. Released in 2024 as part of the agency’s response to Executive Order 14110, it catalogs benefits such as grid optimization and predictive maintenance alongside threats such as adversarial manipulation of machine-learning models embedded in energy control systems. It is the most detailed sector-specific government evaluation publicly available and makes a point that cuts both ways: AI can improve grid resilience while simultaneously expanding the attack surface.

CISA also secured voluntary Secure by Design commitments from major technology providers, including Microsoft, Google, and Amazon Web Services. The pledges ask software manufacturers to embed security principles into product development rather than treating them as aftermarket patches. They carry no statutory penalty for noncompliance, but they create a public record that procurement officers and regulators can reference when evaluating vendors.

Gaps that remain wide open

A Government Accountability Office report found that DHS needs to strengthen its AI risk-assessment guidance for critical infrastructure sectors. The GAO, which operates as a nonpartisan congressional watchdog with audit authority over executive agencies, examined how DHS and sector risk management agencies were handling requirements from the original executive order. Its findings pointed to uneven implementation: some sectors had received tailored guidance while others, including healthcare and financial services, had not, even as AI adoption in those fields accelerated.

Adoption by operators is the largest unknown. No public evidence confirms that utilities, transit authorities, or water systems have formally integrated the CISA-NCSC guidelines or the DHS framework into their internal security programs. The agencies have described what guidance exists and what principles it endorses, but whether the documents are shaping day-to-day engineering decisions at the companies running power plants and treatment facilities has not been independently verified. Without disclosure requirements or standardized reporting, the gap between published guidance and operational practice is impossible to measure.

The voluntary nature of the Secure by Design pledges compounds the problem. Without enforcement mechanisms tied to federal contracts, grants, or licensing, the commitments function as statements of intent. No oversight body has assessed whether they have produced measurable changes in how software reaching critical systems is engineered, particularly for components buried deep inside operational technology or supplied through complex vendor chains.

Sector coverage is also lopsided. Energy has a dedicated DOE assessment. The DHS framework spans multiple sectors at a high level. But comparable primary evaluations for healthcare, water, and financial infrastructure have not surfaced publicly. The GAO’s call for better coordination suggests the current patchwork could leave some sectors with far less rigorous AI risk evaluation than others, even as they deploy similar tools for diagnostics, automated decision-making, and customer service.

Where the pressure is building

Despite the executive order’s revocation, several forces are keeping AI infrastructure security on the agenda. Congressional interest has not faded; bipartisan bills targeting AI transparency and critical-infrastructure cybersecurity have been introduced in both chambers. State regulators, particularly in energy and water, are beginning to ask vendors about AI security practices during procurement reviews. And the international dimension persists: the CISA-NCSC guidelines remain a reference point for allied governments developing their own standards, creating market pressure on multinational vendors to meet them regardless of U.S. federal enforcement.

For organizations that operate or supply technology to critical infrastructure, the practical step is straightforward: review the DHS framework and the CISA-NCSC guidelines against current AI procurement and deployment practices. Both documents are publicly available and written to be actionable, emphasizing secure design, robust testing, and continuous monitoring. Companies that align early position themselves favorably if voluntary commitments eventually harden into procurement requirements or regulatory conditions, a trajectory the GAO report implicitly supports.

The broader trajectory is unmistakable. Federal agencies moved from broad policy directives to sector-specific technical guidance and began coordinating with international partners and independent watchdogs. Then the executive mandate driving that work was pulled. What remains is a collection of frameworks without a forcing function: detailed, technically sound, and largely untested in the field. The next chapter depends on whether Congress, state regulators, or market pressure can close the distance between blueprint and practice before the next attack on a water plant, a power grid, or a transit network involves an adversary that has learned to use AI faster than the defenders have.


*This article was researched with the help of AI, with human editors creating the final content.