Nvidia is spending at a pace that signals a fundamental shift in how the company sees itself, moving from a chipmaker that supplies artificial intelligence hardware to an active builder of the physical infrastructure that AI runs on. Capital expenditures nearly doubled in its most recent fiscal year, and the company is now committing tens of billions more through partnerships and acquisitions that extend well beyond silicon. The higher internal capital spending and the big-ticket external deals point in the same direction, though the company has not presented them as a single consolidated spending total.
Capex Nearly Doubled in Fiscal 2026
Nvidia’s annual report for the fiscal year ended January 25, 2026, shows capital expenditures of $6.1 billion, up sharply from $3.4 billion in the prior fiscal year. That 79 percent jump reflects heavy investment in manufacturing capacity and data center infrastructure designed to keep pace with surging demand for AI compute. The filing also states that the company expects capital expenditures to increase again in fiscal year 2027, though it does not specify a target figure or detailed allocation breakdown.
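The growth figure cited above is easy to sanity-check. This minimal sketch uses only the two capex numbers from the filing:

```python
# Back-of-envelope check of the year-over-year capex growth (figures in billions USD).
prior_capex = 3.4    # fiscal 2025 capital expenditures
latest_capex = 6.1   # fiscal 2026 capital expenditures

growth_pct = (latest_capex - prior_capex) / prior_capex * 100
print(f"Year-over-year capex growth: {growth_pct:.0f}%")  # ~79%
```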
The acceleration is notable because Nvidia has historically operated as a relatively asset-light company. It designs chips and licenses architectures while relying on third-party foundries for fabrication. A capex run rate north of $6 billion, with further increases planned, suggests the company is absorbing more of the physical supply chain into its own operations. That shift carries real financial weight: higher fixed costs, longer payback periods, and greater exposure to demand cycles. But it also gives Nvidia more direct control over production timelines at a moment when AI chip supply remains tight across the industry.
The company’s broader financial disclosures underscore how central this build-out has become to its strategy. In its latest 10-K, Nvidia frames capital investment as a core enabler of its data center and AI platforms rather than a supporting expense. Management highlights spending on networking hardware, systems integration, and supply-chain resiliency alongside traditional chip development, signaling that infrastructure is now a first-order strategic priority.
The $40 Billion Aligned Data Centers Deal
Nvidia’s infrastructure ambitions extend beyond its own factories. According to the Associated Press, a consortium that includes Nvidia and BlackRock agreed to acquire Aligned Data Centers in a transaction valued at roughly $40 billion. The deal adds colocation capacity: the physical space, power, and cooling systems where AI workloads actually run. For Nvidia, the acquisition represents a vertical step into the facility layer of the AI stack, a domain traditionally controlled by cloud providers and specialized real estate operators.
This move challenges a common assumption about Nvidia’s business model. Most coverage treats the company as a supplier that benefits when others build data centers. Buying into a data center operator flips that dynamic. Nvidia would hold a stake in both the hardware filling those facilities and the facilities themselves. That kind of vertical positioning raises questions about how cloud providers and colocation competitors will respond, particularly if they perceive Nvidia as both a vendor and a rival for the same infrastructure footprint.
The Aligned transaction also hints at how Nvidia might approach operational challenges it has not historically faced. Colocation providers specialize in securing power, negotiating with utilities, and optimizing building layouts for dense compute clusters. By partnering with an established operator rather than building greenfield sites alone, Nvidia can plug into existing expertise and customer relationships, potentially shortening the learning curve as it moves deeper into the infrastructure business.
A $100 Billion Commitment to OpenAI
Separately, the Associated Press reported that Nvidia signed a letter of intent outlining a potential $100 billion investment tied to OpenAI, the company behind ChatGPT, aimed at expanding OpenAI’s computing power through gigawatt-scale AI data centers. The sheer size of the commitment dwarfs Nvidia’s own annual capex and places the company in the role of co-builder rather than arms dealer in the AI race. A gigawatt of data center capacity is enough to power hundreds of thousands of high-performance GPUs simultaneously, so the partnership implies construction at a scale that few organizations have attempted.
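The "hundreds of thousands of GPUs per gigawatt" claim can be sanity-checked with rough arithmetic. The per-GPU power figure below is an assumption for illustration, not a disclosed number: modern accelerators draw on the order of 700 to 1,000 watts each, and facility overhead such as cooling, networking, and power conversion can push the all-in draw above a kilowatt per GPU.

```python
# Rough estimate of how many GPUs one gigawatt of capacity can support.
# The 1,200 W all-in figure is an illustrative assumption covering the GPU
# itself plus cooling, networking, and power-conversion overhead.
facility_watts = 1e9          # 1 GW of data center capacity
watts_per_gpu_all_in = 1200   # assumed all-in draw per deployed GPU

gpus_supported = facility_watts / watts_per_gpu_all_in
print(f"Approximate GPUs per gigawatt: {gpus_supported:,.0f}")
```

Under these assumptions the answer lands in the low hundreds of thousands to just under a million, consistent with the article's characterization.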
The letter-of-intent structure is worth scrutiny. It signals serious intent but falls short of a binding contract, meaning the final terms, timeline, and capital deployment schedule could still change. For Nvidia, the arrangement creates a direct financial link to one of the largest consumers of its own products. That relationship is symbiotic but also introduces concentration risk: if OpenAI’s growth trajectory slows or its business model shifts, a $100 billion exposure would weigh heavily on Nvidia’s balance sheet.
There is also competitive sensitivity. OpenAI already relies heavily on major cloud platforms for hosting and training its models. Nvidia’s role as a co-investor in dedicated facilities could alter those relationships, potentially shifting some of OpenAI’s future workloads into environments where Nvidia has more influence over design and procurement. That might deepen Nvidia’s moat in AI accelerators but could also intensify concerns among other customers that the company is favoring one ecosystem player over others.
From Chip Vendor to Infrastructure Operator
Taken together, these moves sketch a company that is rapidly integrating across the AI value chain. Nvidia’s own regulatory filings confirm the internal spending trajectory, while the Aligned Data Centers acquisition and the OpenAI partnership show the external dimension. The common thread is control: over manufacturing, over facility capacity, and over relationships with the largest AI model builders.
This strategy carries risks that the prevailing narrative around Nvidia often glosses over. Higher capex means the company must sustain revenue growth to justify the investment. Data center ownership introduces operational complexity, from power procurement to cooling engineering, that chip designers do not traditionally manage. And the OpenAI commitment, even as a letter of intent, ties Nvidia’s fortunes to a single partner at a scale that could distort its capital allocation for years.
Energy costs represent a particularly pressing constraint. Gigawatt-scale data centers require reliable, affordable electricity in quantities that strain local grids. Securing power purchase agreements, navigating permitting processes, and managing the environmental scrutiny that accompanies massive energy consumption are all challenges that Nvidia has limited institutional experience handling. The Aligned Data Centers acquisition may help on this front by bringing existing power contracts and site expertise into the fold, but scaling that capacity to meet the ambitions outlined in the OpenAI partnership is a different order of magnitude.
Regulatory and political scrutiny could follow. Large-scale data center projects increasingly draw attention from local communities concerned about water use, land consumption, and grid reliability. As Nvidia moves from selling chips to owning and co-developing facilities, it will have to engage directly with those stakeholders and with policymakers who may seek to impose new rules on AI infrastructure.
What This Means for the Broader AI Market
Nvidia’s infrastructure push has direct implications for companies that compete with it or depend on it. Cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud have built their AI strategies partly on the assumption that they control the facility layer while Nvidia controls the chip layer. If Nvidia begins operating its own data center capacity or co-owning facilities with major AI labs, that division of labor becomes less clear. Cloud providers may accelerate their own custom chip programs in response, seeking to reduce reliance on a company that is now a potential infrastructure competitor.
For investors, the spending trajectory demands attention to cash flow dynamics. Nvidia’s fiscal year 2026 capex of $6.1 billion is manageable against its current profitability, but layering on multibillion-dollar data center projects and a prospective $100 billion commitment to OpenAI could compress free cash flow if market conditions change. The company will need to demonstrate that these investments either generate direct returns through infrastructure revenue or indirectly reinforce its pricing power in AI hardware.
Smaller AI startups and enterprise users may face a more concentrated supply landscape. If Nvidia locks up significant portions of future data center capacity through ownership stakes and strategic partnerships, access to top-tier GPUs could increasingly flow through channels it influences. That might simplify procurement for some customers but could also reduce bargaining power and intensify dependence on a single vendor for both chips and compute.
Yet the upside for the broader ecosystem is substantial if Nvidia executes well. More capacity dedicated to AI workloads could ease the current shortage of advanced accelerators, lower unit costs over time, and enable more ambitious models and applications. By shouldering some of the infrastructure burden alongside cloud providers and AI labs, Nvidia may help expand the overall pie, even as it claims a larger slice for itself.
The company is effectively betting that AI will remain the defining compute paradigm for the next decade and that owning more of the underlying infrastructure will secure its position at the center of that shift. Whether that bet pays off will depend not only on Nvidia’s engineering prowess, but also on its ability to operate like a utility-scale infrastructure provider, managing capital intensity, energy constraints, and complex partnerships at a scale far beyond its roots as a chip designer.
*This article was researched with the help of AI, with human editors creating the final content.*