Morning Overview

Nvidia invests $2B in Nebius, a $28B AI cloud provider

Nvidia is putting $2 billion into Nebius Group N.V., an Amsterdam-based AI cloud provider, through a pre-funded warrant that would give the chipmaker an equity stake in the company. The deal, disclosed in a regulatory filing on March 11, 2026, sharply escalates Nvidia’s earlier involvement with Nebius and reflects the chipmaker’s push to deepen ties with cloud partners that deploy its AI hardware, according to the companies’ disclosures and reporting. For Nebius, the capital injection arrives amid intense industry-wide competition for AI-ready data center capacity.

How the $2 Billion Deal Is Structured

The investment takes the form of a securities purchase agreement under which Nebius will sell Nvidia a pre-funded warrant for approximately $2 billion. That warrant is exercisable for 21,065,936 Class A ordinary shares at a nominal price of $0.0001 per share, meaning Nvidia has effectively prepaid for the equity. The near-zero exercise price turns the warrant into something close to an outright share purchase, giving Nvidia an ownership position without the typical friction of staged funding rounds or milestone-based payouts.
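Taking the filing’s figures at face value, the warrant’s economics reduce to simple arithmetic. The sketch below uses the round numbers reported here (the filing says “approximately $2 billion,” so the results are back-of-envelope estimates, not disclosed terms):

```python
# Back-of-envelope math for the pre-funded warrant, using the
# approximate figures reported in the article.
investment = 2_000_000_000   # dollars prepaid by Nvidia (approximate)
shares = 21_065_936          # Class A shares underlying the warrant
exercise_price = 0.0001      # nominal per-share exercise price

implied_price = investment / shares      # effective prepaid price per share
residual_cost = shares * exercise_price  # total still owed at exercise

print(f"Implied price per share: ${implied_price:,.2f}")
print(f"Residual exercise cost:  ${residual_cost:,.2f}")
```

Under these assumed figures, the implied price works out to roughly $95 per share, while exercising the entire warrant would cost only on the order of $2,000 in total, which is why the structure behaves like an outright share purchase rather than a conventional option.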

This structure matters for both sides. Nebius gets the full $2 billion upfront, which it can deploy immediately toward infrastructure buildout. Nvidia, meanwhile, avoids the dilution risk that would come with a convertible note and secures a fixed share count regardless of future price swings. The arrangement can be simpler than a traditional equity round, potentially reducing execution complexity at a time when both companies are moving quickly to add AI capacity.

What the Partnership Actually Covers

Capital is only part of the agreement. A joint announcement filed as an exhibit to the SEC disclosure outlines a technical collaboration that spans several layers of the AI computing stack. The two companies plan to work together on AI factory design and support, inference and agentic AI software, fleet management, and a deployment roadmap that includes Nvidia’s Rubin, Vera, and BlueField product lines.

That scope goes well beyond a typical investor-investee relationship. By embedding its next-generation chip architectures directly into Nebius’s infrastructure planning, Nvidia is effectively co-designing the data centers that will run its hardware. Fleet management cooperation suggests Nvidia will also have input into how Nebius allocates and maintains GPU clusters across its facilities. For customers renting compute from Nebius, this could translate into tighter optimization between the hardware and the cloud software layer, but it also raises questions about how open that stack will remain to competing chip vendors and alternative accelerators.

From $700 Million to $2 Billion

This is not the first time Nvidia has backed Nebius. In late 2024, Nvidia participated in a funding round for the company’s AI cloud business that totaled about $700 million. That earlier deal positioned Nebius as one of a growing number of so-called “neoclouds”: smaller, more specialized alternatives to hyperscalers like Amazon Web Services, Microsoft Azure, and Google Cloud that focus specifically on AI workloads.

The jump from $700 million to $2 billion in roughly 15 months reflects how quickly the competitive dynamics around AI infrastructure have shifted. Hyperscalers have been spending heavily on their own GPU clusters, but they also compete with Nvidia’s other customers for chip supply and, in some cases, develop custom silicon. For Nvidia, investing in independent cloud providers like Nebius creates a parallel distribution channel: one where the cloud operator’s business model depends on buying and deploying Nvidia silicon rather than potentially designing chips in-house that could displace Nvidia’s products over time.

Nvidia’s Broader Cloud Strategy

The Nebius deal fits a broader pattern in how Nvidia is approaching the cloud market. Instead of relying solely on the largest platforms, Nvidia has been steadily expanding its financial and technical ties with specialized providers that sit outside the hyperscaler tier, building an ecosystem of partners whose growth directly feeds demand for its products. This approach gives Nvidia more influence over how its hardware is integrated into end-to-end AI services while diversifying its customer base beyond a handful of dominant buyers.

That strategy carries both advantages and risks. If Nvidia’s investments help create a network of cloud providers that are deeply dependent on its technology, the company gains a reliable outlet for future chip generations. At the same time, it concentrates exposure to a segment of the market that still needs to prove it can compete sustainably on price, scale, and reliability against the largest cloud platforms. As Nvidia leans into this model, the company is also positioning itself more directly inside the broader AI infrastructure supply chain, a shift that has implications for how regulators and enterprise customers view its role.

The Nebius investment underscores that evolution. When a chip designer invests billions in a cloud operator and co-designs the deployment roadmap, the line between hardware vendor and infrastructure provider blurs. That integration can deliver performance advantages and predictable capacity for customers, but it can also make switching to alternative hardware later substantially more costly.

What Nebius Brings to the Table

Nebius Group N.V. is headquartered in Amsterdam and trades on U.S. exchanges. Its annual report for fiscal year 2024, filed with the SEC in April 2025, details its focus on AI-native cloud platforms and the operational risks associated with rapid data center expansion. The company has positioned itself as a purpose-built alternative to general-purpose cloud providers, concentrating its resources on GPU-dense computing environments designed specifically for training and running AI models, rather than hosting a broad mix of enterprise workloads.

That specialization is what makes Nebius attractive to Nvidia, but it also means the company’s fortunes are tightly linked to sustained demand for AI compute. If the current wave of AI spending slows or customers consolidate their workloads onto hyperscaler platforms, neocloud operators face a more difficult path. The $2 billion from Nvidia provides a significant financial cushion, but it also deepens a mutual dependency: Nebius gains a cornerstone investor whose technology underpins its services, while Nvidia becomes more exposed to the performance of a single, relatively young cloud provider.

Competitive and Customer Implications

For enterprise buyers, the Nvidia–Nebius partnership could offer access to cutting-edge GPUs and networking hardware on infrastructure tuned in close collaboration with the chip designer. That may appeal to organizations building large-scale training clusters or latency-sensitive inference systems that benefit from tight hardware-software integration. At the same time, customers will have to weigh the risk of vendor concentration if their most demanding AI workloads are tied to a stack optimized primarily around Nvidia components.

The deal also lands in a market where investors and customers are watching how quickly neocloud providers can turn capital into usable capacity.

Whether this model becomes a template for future Nvidia investments will depend on how well Nebius executes on its expansion plans and how customers respond to a more tightly integrated hardware-cloud offering. For now, the $2 billion pre-funded warrant cements Nebius as one of Nvidia’s most important partners outside the hyperscaler tier and underscores how central dedicated AI cloud capacity has become to the next phase of the industry’s growth.


*This article was researched with the help of AI, with human editors creating the final content.