Morning Overview

New power grid threat emerges as data centers unplug all at once

Dozens of data centers in Northern Virginia abruptly dropped off the power grid in two separate incidents over the past year, forcing grid operators into emergency mode and exposing a vulnerability that federal regulators are now racing to address. The episodes, in which facilities switched en masse to backup generators after transmission faults, have drawn formal action from the Federal Energy Regulatory Commission and the U.S. Department of Energy. As artificial intelligence drives explosive growth in data center power consumption, the risk that these massive loads can vanish from the grid in seconds presents a distinct and growing threat to electricity reliability across the Mid-Atlantic.

Virginia Near-Misses Reveal a Hidden Grid Risk

The core danger is straightforward but poorly understood. When a high-voltage transmission line trips or malfunctions, data centers equipped with automatic transfer switches cut over to diesel backup generators almost instantly. That protects the servers. But from the grid’s perspective, hundreds of megawatts of demand disappear in a flash, creating a sudden surplus of power that can destabilize frequency and voltage across the network. In February 2025, roughly 40 facilities in Northern Virginia switched to backup power after a high-voltage line malfunction. Months earlier, a July 2024 event involved approximately 70 data centers in the same region, underscoring that the phenomenon is not a one-off anomaly but an emerging pattern tied to how these campuses are engineered.
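The scale of that shock can be sketched with the textbook swing equation, which relates a sudden power imbalance to how fast grid frequency drifts before generators respond. This is a back-of-the-envelope illustration only: the system size, inertia constant, and per-facility load below are assumed round numbers, not PJM figures.

```python
# Toy illustration (not PJM's actual model): the initial rate of change
# of frequency (ROCOF) after a sudden loss of load, via the simplified
# aggregate swing equation  df/dt = f0 * dP / (2 * H * S).
# All parameters are assumptions chosen for illustration.

F0 = 60.0           # nominal grid frequency, Hz
H = 4.0             # assumed aggregate inertia constant, seconds
S_SYSTEM = 100_000  # assumed online generating capacity, MW

def rocof(load_lost_mw: float) -> float:
    """Initial frequency rise rate (Hz/s) when load_lost_mw of demand
    vanishes and generation output has not yet adjusted."""
    imbalance_pu = load_lost_mw / S_SYSTEM
    return F0 * imbalance_pu / (2 * H)

# Roughly 70 campuses shedding ~20 MW each, the scale described above
print(f"{rocof(70 * 20):.3f} Hz/s")
```

Even on these rough assumptions, frequency starts climbing on the order of a tenth of a hertz per second, which is why operators must rebalance within seconds rather than minutes.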

Both incidents forced grid operators to take emergency action to rebalance supply and demand on the fly. Northern Virginia hosts the world’s densest concentration of data centers, and the sheer volume of load that can vanish simultaneously is unlike anything grid planners have historically modeled. Traditional large industrial customers, such as steel mills or chemical plants, do not disconnect in coordinated clusters the way data centers do when a shared transmission corridor faults. That clustering effect makes the problem qualitatively different from ordinary demand fluctuations, and it is amplified by scale: Wall Street Journal reporting early last year found that a cluster of new campuses in Virginia was on track to consume a substantial share of the state’s power by the end of the decade.

FERC Orders PJM to Write New Rules

The federal response has been unusually rapid by regulatory standards. The Federal Energy Regulatory Commission, or FERC, opened a proceeding to examine reliability and cost issues associated with large, energy-intensive loads such as AI data centers that are sited alongside power plants within PJM Interconnection’s footprint. PJM is the nation’s largest regional transmission organization, coordinating electricity across multiple Mid-Atlantic and Midwestern states and the District of Columbia, and its market rules effectively set the template for how other regions may handle similar clusters of digital infrastructure. The FERC review is grounded in a technical conference record and stakeholder comments that warn of new operational stresses created when enormous computing facilities share substations and transmission paths with generation resources.

In a related directive filed under docket EL25-49-000, FERC instructed PJM to design clear standards for serving co-located load while maintaining reliability and protecting consumers from undue costs. The commission laid out specific deadlines for informational filings and compliance plans, signaling that it expects tangible reforms rather than open-ended study. For electricity customers, the stakes are direct: if sudden data center disconnections force PJM into emergency operations, the operator may need to call on expensive peaking plants or import power at premium prices, and those spikes ultimately filter down into retail bills. FERC’s orders amount to an acknowledgment that legacy interconnection procedures and operating criteria were not built for a grid where a single corridor can shed the equivalent of a mid-sized city’s power demand in under a second.

Forecasting Blind Spots Compound the Danger

Even before a transmission fault triggers mass disconnection, grid operators struggle to predict how much power data centers will actually draw at any given moment. PJM has flagged this challenge in formal correspondence on large-load forecasting, noting that rapid, uncertain growth in data center demand complicates both long-term planning and day-to-day operations. Traditional forecasting methods assume that big industrial loads ramp up gradually and follow relatively stable patterns tied to economic activity and weather. By contrast, AI training cycles and cloud-computing workloads can swing hundreds of megawatts within hours, and developers sometimes revise their build-out timelines faster than planners can update their models.

This forecasting gap matters because grid operators commit generation and reserves based on expected peaks and ramps. If actual demand is far lower or higher than predicted, or if it vanishes abruptly when automatic transfer switches send data centers to backup generation, the mismatch can trigger frequency deviations and voltage swings that stress equipment across the network. Reuters has reported that the amount of power consumed by data centers has roughly tripled over the past decade and could triple again as AI and cloud services expand. That trajectory suggests that today’s forecasting blind spots will grow more consequential, particularly in regions like PJM where many of the world’s largest campuses are already clustered on a relatively constrained transmission network.

Emergency Powers Already in Play

The Department of Energy has not waited for FERC’s rulemaking to run its course. Heading into a recent winter, the Energy Secretary used emergency authority under Section 202(c) of the Federal Power Act to authorize additional generation in PJM, citing concerns about tight reliability margins in the Mid-Atlantic. Section 202(c) is an extraordinary tool, historically reserved for wartime contingencies or acute crises. Its deployment underscores how narrow the region’s buffer has become. While the order addressed a suite of risks, including fuel constraints and weather-driven demand surges, it landed against the backdrop of the Virginia data center near-misses and the broader strain from rapidly growing digital loads.

Invoking emergency powers does not directly solve the problem of data centers unplugging in unison, but it buys time by ensuring more capacity is available when the grid is under stress. That stopgap approach, however, carries its own costs, since emergency generation can be more polluting and more expensive than ordinary dispatch. It also highlights the limits of relying on ad hoc measures instead of structural fixes: without new technical requirements for transfer switches, better visibility into backup generator behavior, and market rules that reflect the true volatility of data center demand, PJM and other operators may find themselves repeatedly turning to extraordinary authorities just to keep the lights on during periods of high digital activity or extreme weather.

Designing a More Resilient Digital Grid

Regulators and grid planners are now grappling with how to adapt infrastructure and rules so that digital growth strengthens rather than undermines reliability. One focus is on equipment standards: automatic transfer switches could be configured to delay or stagger the shift to backup power, reducing the instantaneous shock to the system when a transmission line trips. Another avenue is to require large campuses to ride through certain grid disturbances instead of disconnecting immediately, much as modern industrial facilities are expected to tolerate brief voltage sags without shutting down. These technical measures would need to be paired with robust telemetry so that operators can see, in real time, how much load is at risk of dropping off a given corridor.
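The staggering idea above can be made concrete with a small simulation: instead of every automatic transfer switch cutting over at once, each facility is assigned a randomized delay so load leaves the grid in increments. The facility count, per-facility load, and delay window below are illustrative assumptions, not values from any FERC or PJM proposal.

```python
# Sketch of staggered automatic-transfer-switch (ATS) cutover: spread
# transfer events over a short window so the grid never sees one large
# step loss of load. Parameters are illustrative assumptions.

import random

def staggered_delays(n_facilities: int, window_s: float, seed: int = 0) -> list[float]:
    """Assign each facility a random ATS delay within window_s seconds."""
    rng = random.Random(seed)
    return sorted(rng.uniform(0.0, window_s) for _ in range(n_facilities))

def peak_step_mw(delays: list[float], mw_each: float, bucket_s: float = 0.5) -> float:
    """Largest load drop the grid absorbs in any bucket_s slice of time."""
    buckets: dict[int, float] = {}
    for d in delays:
        key = int(d // bucket_s)
        buckets[key] = buckets.get(key, 0.0) + mw_each
    return max(buckets.values())

# 70 facilities of ~20 MW each: simultaneous cutover drops 1,400 MW at
# once, while staggering over 30 seconds yields much smaller steps.
delays = staggered_delays(70, window_s=30.0)
print(peak_step_mw(delays, mw_each=20.0))
```

The trade-off is that servers ride on grid power slightly longer during a disturbance, which is why such delays would likely be paired with ride-through requirements like those described above.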

Market design is just as important as hardware. If data centers are compensated for providing flexible demand—agreeing, for instance, to modulate non-time-critical computing tasks in response to grid conditions—they can act as a stabilizing resource rather than a source of sudden shocks. FERC’s directives to PJM on co-located load, along with the operator’s own efforts to refine forecasting and interconnection studies, point toward a future in which hyperscale campuses are treated more like integral components of the bulk power system and less like passive customers. The Virginia incidents have made clear that as AI and cloud services continue to expand, the line between digital infrastructure and critical grid infrastructure has effectively disappeared, and policy will have to catch up.


*This article was researched with the help of AI, with human editors creating the final content.