Morning Overview

AI data centers now command roughly 29.6 gigawatts of power capacity, nearly matching New York state’s record peak load, and demand is doubling roughly every 18 months

Somewhere in central Virginia, a utility crew is stringing new high-voltage transmission lines to feed a cluster of data centers that did not exist three years ago. Across the country in central Oregon, a small city is debating whether to let another hyperscale campus tap into a grid that already strains on hot summer afternoons. These are not isolated stories. They are symptoms of a single, accelerating trend: the artificial intelligence industry now commands roughly 29.6 gigawatts of power capacity worldwide, according to Stanford’s 2026 AI Index Report, which draws on chip-shipment modeling by the research group Epoch AI.

To put that in perspective, 29.6 gigawatts is nearly equal to the highest electricity demand ever recorded across New York state, a grid that serves roughly 20 million people. That record, 33,955 megawatts, was set during a brutal heat wave in the summer of 2013 and has been used as a planning benchmark by the New York Department of Public Service and the state’s independent system operator ever since. The fact that a single sector of the digital economy now rivals that figure signals a shift that grid planners, ratepayers, and policymakers can no longer treat as a future problem.

Where the 29.6-gigawatt number comes from

Epoch AI arrived at the estimate by tracking global shipments of AI-specialized chips, primarily GPUs from Nvidia and AMD along with custom accelerators designed in-house at companies like Google, then mapping those shipments to known power consumption curves. The approach is bottom-up: rather than relying on self-reported figures from data center operators, who have commercial reasons to keep energy use opaque, it builds from semiconductor supply data. Stanford’s Human-Centered Artificial Intelligence institute vetted and republished the figure as a central reference point in its annual report, released in April 2026.
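
In code form, the logic of a bottom-up estimate is simple, even though the real model behind the 29.6-gigawatt figure is far more detailed. In the sketch below, the shipment counts, per-chip power ratings, and overhead factor are illustrative placeholders, not Epoch AI’s actual inputs.

```python
# Illustrative sketch of a bottom-up capacity estimate: multiply estimated
# chip shipments by per-chip power draw, then add facility overhead.
# All numbers below are placeholders, not Epoch AI's actual inputs.

# (accelerator class, estimated units shipped, rated power per chip in watts)
shipments = [
    ("high_end_gpu", 3_000_000, 700),
    ("older_or_custom_chip", 1_500_000, 400),
]

# Multiplier for cooling, networking, and power conversion overhead,
# akin to a power usage effectiveness (PUE) factor.
facility_overhead = 1.3

chip_watts = sum(units * watts for _, units, watts in shipments)
total_gw = chip_watts * facility_overhead / 1e9
print(f"Implied installed capacity: ~{total_gw:.1f} GW")  # ~3.5 GW with these inputs
```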

The number should be understood as installed capacity, not a real-time meter reading. At any given moment, some fraction of those chips sit idle or run below peak load. But the gap between capacity and consumption is narrowing as AI training runs grow larger and inference workloads (the queries that power chatbots, image generators, and AI-augmented search) multiply. A gigawatt, for readers unfamiliar with the unit, is 1,000 megawatts, enough to supply roughly 750,000 American homes under typical conditions. At 29.6 gigawatts, the global AI data center fleet could theoretically power more than 22 million homes.
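
For readers who want to check the arithmetic, the conversion is a single multiplication; here is a minimal sketch using the rough homes-per-gigawatt rule of thumb cited above.

```python
# Back-of-the-envelope conversion behind the figures above. The
# homes-per-gigawatt factor is the rough rule of thumb cited in the text,
# not a measured value.
HOMES_PER_GIGAWATT = 750_000

ai_capacity_gw = 29.6
homes_equivalent = ai_capacity_gw * HOMES_PER_GIGAWATT
print(f"~{homes_equivalent / 1e6:.1f} million homes")  # prints ~22.2 million homes
```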

Demand is doubling every 18 months

The International Energy Agency’s Energy and AI report documents the growth trajectory. Under current investment patterns and policy frameworks, the IEA projects that global data center electricity demand tied to AI could roughly double every 18 months through the latter half of this decade. Training clusters for frontier models and large-scale inference services account for the bulk of that increase.
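
To see what that growth rate implies, the compounding can be sketched in a few lines. The starting point is the 29.6-gigawatt estimate, and the projection is a pure extrapolation for illustration, not an IEA forecast for any specific year.

```python
# What "doubling roughly every 18 months" implies if the trend holds,
# starting from the 29.6 GW estimate. A pure extrapolation for illustration,
# not an IEA projection for specific years.
base_gw = 29.6
doubling_period_years = 1.5

for years_out in (1.5, 3.0, 4.5):
    projected_gw = base_gw * 2 ** (years_out / doubling_period_years)
    print(f"{years_out:.1f} years out: ~{projected_gw:.0f} GW")
# 1.5 years: ~59 GW, 3.0 years: ~118 GW, 4.5 years: ~237 GW
```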

The agency maintains a dedicated Energy and AI Observatory that aggregates regional indicators: data center capacity additions, power usage effectiveness (PUE) ratios, and growth rates by market. The United States dominates the buildout. Hyperscale campuses are concentrated in states with relatively cheap electricity and permissive permitting, particularly Virginia, Texas, and parts of the Pacific Northwest.

The strain is already visible in grid interconnection queues. PJM Interconnection, the regional transmission organization that manages the grid across 13 states and the District of Columbia, reported in 2024 that data center requests had swelled its queue to record levels, with wait times for new connections stretching years. Dominion Energy, the primary utility in northern Virginia’s “Data Center Alley,” has warned that load growth in its territory is outpacing its ability to build new generation and transmission infrastructure. These are not projections. They are operational realities that utilities are grappling with as of mid-2026.

What the comparison to New York actually tells us

Comparing a global installed capacity figure to a single state’s momentary peak demand is, admittedly, an apples-to-oranges exercise. New York’s 33,955-megawatt record reflects the highest simultaneous draw across every home, office, factory, and subway car in the state during one sweltering afternoon. The 29.6-gigawatt AI figure reflects the total power that AI-linked data centers could draw if every facility ran at full tilt, spread across dozens of countries.

But the comparison is useful precisely because it conveys scale in terms people can grasp. Most Americans have some intuitive sense of how much electricity a large, densely populated state consumes. Learning that a technology sector that barely registered on utility planning documents five years ago now rivals that consumption forces a recalibration of assumptions about where future electricity demand will come from.

New York’s own grid outlook underscores the tension. The Department of Public Service’s summer reliability forecasts still use the 2013 peak as a planning benchmark, and reserve margins have tightened as building electrification and electric vehicle charging layer onto baseline consumption. The state does not yet break out AI-specific load additions as a separate line item in its filings, a gap that grid analysts say needs to close as data center proposals multiply in the Hudson Valley and on Long Island.

The efficiency question

Not everyone agrees the doubling curve will hold. Chip designers are shipping more energy-efficient accelerators with each generation. Nvidia’s Blackwell architecture, for instance, delivers substantially more computation per watt than its predecessor. Liquid cooling systems, which transfer heat far more effectively than traditional air conditioning, are becoming standard in new AI facilities and can cut a data center’s total energy overhead by 20 to 40 percent, according to industry estimates.
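
To make the overhead arithmetic concrete, here is a minimal sketch; the baseline power usage effectiveness value and IT load below are assumptions chosen for illustration, not figures from the report.

```python
# Illustrative effect of cutting a facility's non-IT energy overhead
# (cooling and power conversion) by 20 to 40 percent. The IT load and
# baseline PUE below are assumptions for the sketch, not reported figures.
it_load_mw = 100.0        # power drawn by the chips themselves
baseline_pue = 1.4        # assumed: 40 MW of overhead per 100 MW of IT load

overhead_mw = it_load_mw * (baseline_pue - 1)
for cut in (0.20, 0.40):
    total_mw = it_load_mw + overhead_mw * (1 - cut)
    print(f"{cut:.0%} overhead cut: {total_mw:.0f} MW total "
          f"(effective PUE {total_mw / it_load_mw:.2f})")
# 20% cut: 132 MW (PUE 1.32); 40% cut: 124 MW (PUE 1.24)
```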

At the same time, efficiency gains have historically been swallowed by demand growth, a dynamic economists call the Jevons paradox. Cheaper, faster inference makes AI useful in more applications, which drives more queries, which requires more chips. Whether aggregate electricity consumption tracks more closely with the number of chips deployed, the volume of user queries, or the sophistication of models remains an open question that no current dataset can definitively answer.

Major cloud providers are hedging by investing in new power sources. Microsoft has signed agreements to purchase nuclear energy, including a deal to restart a unit at the Three Mile Island plant in Pennsylvania. Google has invested in geothermal startups and signed the largest corporate clean-energy power purchase agreement on record. Amazon has acquired a nuclear-powered data center campus in Pennsylvania and is exploring small modular reactors. These moves reflect a recognition that the existing grid cannot absorb AI’s appetite without new generation, and that relying solely on renewables may not provide the round-the-clock baseload these facilities require.

What ratepayers and regulators should watch

For households and businesses, the practical concern is straightforward. AI data centers are increasingly competing for the same grid capacity that powers homes, hospitals, and factories. In regions where new generation and transmission lag behind demand, large clusters of AI facilities can tighten reserve margins and push up wholesale electricity prices, costs that eventually filter into retail bills.

Regulators in several states are beginning to respond. Virginia’s State Corporation Commission has scrutinized whether data center operators should bear a larger share of transmission upgrade costs. Georgia regulators clashed with Georgia Power over plans to add gas-fired generation partly to serve data center load. And the Electric Reliability Council of Texas, the state’s grid operator, has flagged data centers as a significant variable in its long-term demand forecasts.

At the federal level, the Department of Energy has called for better disclosure of data center energy use and more granular tracking of AI-specific workloads. Without that transparency, planners are working from estimates and models rather than measured consumption, a precarious foundation for decisions about billions of dollars in grid investment.

The 29.6-gigawatt figure and the New York comparison offer a useful, if imperfect, snapshot of where things stand in mid-2026. The numbers will almost certainly look different a year from now. The question is whether the grid infrastructure, the regulatory frameworks, and the public investment needed to support this growth will keep pace, or whether the AI industry’s appetite for electricity will outrun the systems built to deliver it.

*This article was researched with the help of AI, with human editors creating the final content.