In Northern Virginia, the largest data center market on Earth, utility Dominion Energy has warned that new facility connections could face multi-year delays because the local grid simply cannot keep up. Across the country, similar bottlenecks are forming as artificial intelligence workloads push electricity demand far beyond what power companies planned for. The result: a tightening supply of the compute capacity that every major technology company is racing to secure.
Federal data now quantifies the scale of the problem. U.S. data center electricity consumption roughly tripled over the past decade, climbing from an estimated 58 terawatt-hours in 2014 to approximately 176 TWh in 2023, according to a technical assessment by Lawrence Berkeley National Laboratory commissioned by the Department of Energy. Under the study’s upper-bound scenario, that figure could reach 600 TWh by 2028, more than tripling again in just five years.
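The growth rates implied by those figures are worth making explicit. As a rough sketch using only the numbers cited above (the function name and calculation are ours, not from the LBNL report):

```python
# Back-of-envelope growth rates implied by the LBNL figures cited above:
# 58 TWh (2014), 176 TWh (2023), and the ~600 TWh upper-bound scenario (2028).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

historical = cagr(58, 176, 2023 - 2014)  # past decade of growth
projected = cagr(176, 600, 2028 - 2023)  # upper-bound scenario

print(f"Historical growth: {historical:.1%} per year")   # ~13% per year
print(f"Upper-bound growth: {projected:.1%} per year")   # ~28% per year
```

In other words, the upper-bound scenario assumes annual growth roughly doubling from the pace of the past decade, which is why utilities that planned around historical trends are being caught short.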
The DOE’s summary of the findings is blunt: data center expansion and AI applications are forces that long-term U.S. grid planning must now explicitly account for. As of spring 2026, that accounting is still catching up to reality.
Why AI changes the math
Traditional cloud computing and video streaming drove steady growth in data center power use for years, but AI has changed the curve. Training a single large language model can require thousands of accelerated servers running at full load for weeks or months. Running those models at scale once they are trained, a process called inference, adds a sustained baseline of demand that did not exist five years ago.
The hardware itself is far hungrier than conventional servers. A single rack packed with the latest AI accelerators from Nvidia can draw 40 to 100 kilowatts, compared with 7 to 15 kilowatts for a typical enterprise server rack. Multiply that across a hyperscale campus with thousands of racks, and the power draw rivals that of a small city. Cooling those chips compounds the problem: as rack densities climb, traditional air cooling hits physical limits, pushing operators toward liquid cooling systems that require new infrastructure and different power profiles.
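A quick calculation shows why a campus full of such racks rivals a small city. The rack count and per-rack draws below are illustrative assumptions drawn from the ranges cited above, not figures from any specific operator:

```python
# Illustrative rack math. Per-rack draws fall within the ranges cited above
# (40-100 kW for AI racks, 7-15 kW for enterprise racks); the campus size
# is a hypothetical hyperscale example.

AI_RACK_KW = 80           # dense AI accelerator rack
ENTERPRISE_RACK_KW = 10   # typical enterprise server rack
RACKS_PER_CAMPUS = 5_000  # hypothetical hyperscale campus

ai_campus_mw = AI_RACK_KW * RACKS_PER_CAMPUS / 1_000
enterprise_campus_mw = ENTERPRISE_RACK_KW * RACKS_PER_CAMPUS / 1_000

print(f"AI campus: {ai_campus_mw:.0f} MW")                  # 400 MW
print(f"Enterprise campus: {enterprise_campus_mw:.0f} MW")  # 50 MW
```

Under these assumptions, swapping enterprise racks for AI racks multiplies a campus's continuous draw roughly eightfold, and 400 MW of sustained load is on the order of the average demand of a few hundred thousand U.S. homes.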
The International Energy Agency sees the same pattern globally. In its analysis of energy demand from AI, the IEA projects worldwide data center electricity consumption will roughly double to about 945 TWh by 2030 under its base case, with AI-specific accelerated servers identified as the primary driver of the increase. The United States hosts a disproportionate share of global AI compute, which means it absorbs a disproportionate share of the power burden.
Where the strain hits hardest
National averages obscure the real pressure points. Data centers cluster in regions with favorable tax policies, abundant fiber connectivity, and available land. Northern Virginia’s “Data Center Alley” along the Dulles corridor handles roughly 70% of the world’s internet traffic by some industry estimates. Other hot spots include Dallas-Fort Worth, Phoenix, central Oregon, and parts of the greater Chicago area.
Researchers at the IM3 group and the Electric Power Research Institute have built a granular dataset, published through the DOE’s Office of Scientific and Technical Information, that breaks projected data center loads down to individual counties and grid Balancing Authorities, the regional entities responsible for matching electricity supply with demand in real time. That geographic precision reveals a stark reality: even when national grid capacity looks adequate on paper, specific corridors face severe local imbalances that can delay new connections, drive up electricity rates for all customers, and force operators to look elsewhere.
Dominion Energy’s situation in Virginia is the most visible example, but it is not unique. Utilities in Georgia, Texas, and the Pacific Northwest have all flagged surging data center interconnection requests that strain existing transmission and generation infrastructure. Some have imposed informal moratoriums or extended timelines for new large-load connections, a development that directly limits how quickly AI companies can bring new capacity online.
The scramble for power solutions
Faced with grid constraints, technology companies are pursuing unconventional power sources. Microsoft signed a 20-year agreement with Constellation Energy to restart a unit at the Three Mile Island nuclear plant in Pennsylvania. Amazon acquired a data center campus adjacent to a Talen Energy nuclear facility in the same state. Google has signed agreements with nuclear startup Kairos Power for small modular reactors that do not yet exist commercially.
These deals reflect a recognition that renewable energy procurement alone may not solve the problem. Corporate power purchase agreements for wind and solar can add clean generation to the grid, but they do not guarantee power delivery to a specific data center at a specific hour. Nuclear offers the baseload reliability that AI workloads demand, though new nuclear capacity takes years to build and faces regulatory and cost uncertainties that no purchase agreement can fully resolve.
On the efficiency front, chip designers continue to improve performance per watt with each hardware generation. The LBNL report accounts for these trends in its projections, but the gap between its lower and upper scenarios is wide enough to suggest that technology choices still being made will determine whether efficiency gains can offset raw demand growth. Software optimization matters too: researchers have shown that model architectures, training techniques, and inference strategies can vary by an order of magnitude in energy consumption for equivalent output. Whether the industry prioritizes efficiency or simply chases ever-larger models will shape the trajectory.
What businesses should watch
For companies that rely on cloud computing or AI services, the implications are concrete and near-term. Colocation rates in constrained markets have already risen sharply, and major cloud providers have begun signaling that capacity in certain regions is limited. Compute costs are likely to climb further as power constraints tighten supply, and access to high-performance AI infrastructure may become a competitive advantage rather than a commodity.
The federal government has signaled interest in addressing the gap. Agencies like ARPA-E fund research into advanced computing efficiency and power electronics, and pilot projects exploring co-optimization of data center design with renewable generation and storage are underway. But the scale of public investment remains modest compared with the hundreds of billions of dollars the private sector plans to spend on AI infrastructure over the next several years.
The projection that U.S. data center load could double or triple by 2028 describes a range of outcomes, not a certainty. The lower end implies manageable growth if utilities expand capacity in time and efficiency improvements materialize. The upper end implies a genuine supply crisis in which companies cannot secure enough power to run the servers they have ordered, forcing delays, relocations, or scaled-back plans. The difference between those outcomes depends on decisions being made right now by utilities, regulators, chip designers, and the technology companies whose AI ambitions are driving the surge. Electricity, long treated as an invisible input, has become the bottleneck that determines how fast the AI era can actually move.
*This article was researched with the help of AI, with human editors creating the final content.