Morning Overview

Big Tech commits $725 billion to AI infrastructure in 2026, eclipsing the GDP of most nations

Five years ago, the largest American technology companies spent roughly $150 billion a year building out their digital empires. In 2026, that figure has ballooned to as much as $725 billion, according to aggregated capital expenditure guidance disclosed during the first-quarter earnings cycle that closed in late April. The sum exceeds the entire gross domestic product of countries like Switzerland or Poland. It rivals the U.S. federal defense budget. And it is being directed, overwhelmingly, at a single bet: that artificial intelligence will reshape computing so profoundly that any company without massive infrastructure will be left behind.

The numbers behind the surge

Meta Platforms delivered the most striking individual disclosure. In its first-quarter investor filing, the company raised its full-year capex guidance to between $125 billion and $145 billion, up from roughly $38 billion just two years earlier. That range alone would rank Meta’s infrastructure budget above the GDP of Ecuador or Kenya. The filing ties nearly all of the spending to data center construction and the specialized computing hardware needed to train and run large AI models.

Meta is not an outlier. Microsoft signaled approximately $80 billion in capital spending for its fiscal year, much of it flowing into Azure cloud regions optimized for AI workloads. Amazon’s capital expenditure guidance topped $100 billion, driven by AWS expansion and custom chip development. Alphabet indicated roughly $75 billion, with CEO Sundar Pichai telling analysts that the company sees “no scenario” in which it is spending too much on AI infrastructure. Together with smaller but still substantial commitments from companies like Apple and Oracle, the sector’s combined outlay crossed the $700 billion mark and kept climbing.
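For readers keeping score, the disclosed figures can be tallied directly. The back-of-the-envelope sketch below uses only the numbers quoted above (in billions of dollars), treating Amazon's "topped $100 billion" as a floor and the approximate figures as point estimates; the gap between this subtotal and the roughly $725 billion sector-wide total is the portion the article attributes to Apple, Oracle, and other companies.

```python
# Capex guidance figures cited in the article, in billions of USD.
# Each entry is a (low, high) range; point estimates repeat the value.
guidance = {
    "Meta": (125, 145),      # full-year range from Q1 filing
    "Microsoft": (80, 80),   # approximately $80B for its fiscal year
    "Amazon": (100, 100),    # guidance "topped $100B"; floored at 100
    "Alphabet": (75, 75),    # roughly $75B
}

# Sum the low and high ends of the ranges separately.
low = sum(lo for lo, _ in guidance.values())
high = sum(hi for _, hi in guidance.values())

print(f"Four largest spenders combined: ${low}B to ${high}B")
print(f"Remainder of ~$725B total: ${725 - high}B to ${725 - low}B")
```

The arithmetic shows the four largest spenders account for roughly $380 billion to $400 billion of the total, or a bit more than half of the headline figure.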

To appreciate the scale: the entire U.S. Department of Defense budget for fiscal year 2025 was approximately $886 billion. A handful of private corporations are now channeling a comparable amount into server farms, custom silicon, networking gear, land acquisition, and the electrical infrastructure to power it all. Each dollar committed today locks in years of downstream energy consumption and maintenance costs, making these decisions extraordinarily difficult to reverse once concrete is poured and turbines are spinning.

Where the money is actually going

The spending is not abstract. It translates into physical projects scattered across three continents. Meta has disclosed plans for multiple data center campuses in the U.S. and abroad, some exceeding two million square feet. Microsoft is expanding Azure regions in the American South and Midwest, where electricity is cheaper and land is available, while also building capacity in Sweden and Japan. Amazon has signed long-term power purchase agreements with nuclear and solar providers to guarantee energy for new AWS facilities in Virginia, Oregon, and overseas.

Custom chips are consuming a growing share of the budgets. All four of the largest spenders now design proprietary AI accelerators to reduce their dependence on Nvidia, whose GPUs remain the industry’s default training hardware. Meta’s MTIA chips, Amazon’s Trainium processors, Google’s TPUs, and Microsoft’s Maia accelerators are all scaling up in 2026, though Nvidia’s next-generation Blackwell architecture continues to command enormous orders. Jensen Huang, Nvidia’s CEO, has described the current environment as a “trillion-dollar data center buildout” that will play out over the next several years.

The electrical demands are staggering. A single large AI training cluster can draw as much power as a small city. Utilities in Virginia’s “Data Center Alley,” already the densest concentration of server farms on Earth, have warned that new interconnection requests could strain the regional grid. Similar concerns have surfaced in Ireland, the Netherlands, and parts of Texas. Several tech companies have responded by investing directly in power generation, signing deals for small modular nuclear reactors, geothermal plants, and dedicated solar and wind farms that bypass the public grid entirely.

Why executives say they cannot slow down

The logic driving these commitments is straightforward, if unproven at this scale. AI models are growing larger and more compute-intensive with each generation. Training a frontier model in 2026 requires orders of magnitude more processing power than it did in 2023, when ChatGPT first triggered the current arms race. Inference, the process of actually running a trained model to serve users, is scaling even faster as AI features spread into search, advertising, productivity software, and consumer devices.

Executives across the sector have framed the spending as existential. Meta CEO Mark Zuckerberg has said publicly that underinvesting in AI infrastructure would be a bigger risk than overinvesting. Microsoft CEO Satya Nadella has described AI as the most important platform shift since the internet. Amazon CEO Andy Jassy has pointed to AWS customer demand for AI compute as growing faster than any workload in the company’s history.

Wall Street, for now, is largely buying the argument. Shares of all four major spenders held steady or rose after the capex announcements, suggesting that investors had already priced in aggressive spending and were reassured by the specifics. But stock price reactions reflect trader sentiment, not validation of the underlying strategy. The real test will come when these data centers are operational and the companies must demonstrate that AI services can generate revenue proportional to the investment.

The risks no one can fully price

The $725 billion figure rests on guidance, not guaranteed outlays. Tech companies have historically revised capex plans downward when revenue growth stalls or supply chains buckle. Semiconductor shortages, permitting delays for new data centers, and rising electricity costs could all force mid-year adjustments. The numbers reflect management intent as of late April 2026, and they may prove sensitive to macroeconomic shifts, regulatory pushback, or a simple reassessment of demand.

Return on investment remains the largest open question. None of the available filings quantify the near-term revenue these data centers are expected to generate. The absence of concrete payback timelines means investors and analysts are, for now, taking management at its word that the demand curve justifies the outlay. If AI services fail to command premium pricing, or if enterprise adoption plateaus, the same infrastructure that looks like a strategic moat today could resemble expensive excess capacity within a few years.

There is also the question of how much of this spending is genuinely new. Some portion of the announced capex would have occurred regardless of the AI boom, as companies routinely refresh servers, expand storage, and upgrade networking. Without more granular disclosure, it is difficult to separate baseline infrastructure maintenance from AI-specific buildout. That distinction matters for policymakers and investors trying to assess whether this cycle is truly transformative or partly a relabeling of routine capital replacement.

What $725 billion means beyond the balance sheet

The consequences of this spending wave extend well past quarterly earnings. Electrical utilities, construction firms, and semiconductor manufacturers are already adjusting their own capacity plans in response to tech-sector demand. Bechtel, one of the largest U.S. construction companies, has said data center work now represents a significant and growing share of its pipeline. Utility executives in the Southeast and Mid-Atlantic have testified before state regulators that projected load growth from data centers is unlike anything they have planned for in decades.

Communities near proposed sites face sharp trade-offs. Data centers bring construction jobs, property tax revenue, and long-term maintenance employment, but they also consume vast quantities of water for cooling, generate low-frequency noise, and place heavy demands on aging electrical grids. Local planning boards from northern Virginia to central Ohio are negotiating conditions that would have seemed exotic five years ago: dedicated substations, water recycling mandates, and noise abatement walls surrounding facilities the size of aircraft hangars.

Geopolitically, the spending reinforces a widening gap between U.S. and Chinese AI infrastructure. American export controls have restricted China’s access to the most advanced AI chips, and Beijing’s own tech giants, while investing heavily, are operating under tighter capital constraints and less access to cutting-edge semiconductor manufacturing. The $725 billion figure is, in part, a statement of intent: that the United States plans to maintain its lead in AI compute capacity through sheer scale of private investment, even as questions mount about whether the returns will justify the cost.

The most telling indicators will arrive in the coming quarters. Construction permits, utility interconnection agreements, and chip procurement contracts are all public or semi-public records that can confirm whether announced budgets are converting into physical projects. Until shovels are in the ground and power is flowing, the $725 billion remains a projection. But as a statement of corporate ambition, it is already reshaping industries, communities, and the global competition for technological dominance in ways that will take years to fully measure.


This article was researched with the help of AI; human editors created the final content.