Anthropic has signed a multiyear deal with CoreWeave to supply GPU computing power for its Claude AI models, according to Bloomberg. The agreement pairs one of the most closely watched AI startups with a cloud infrastructure company that went public in March 2025 and has quickly become a preferred provider for firms that need massive clusters of Nvidia chips without building their own data centers.
The deal comes just days after Anthropic and Google Cloud jointly announced a separate expansion that will bring “multiple gigawatts” of TPU capacity online starting in 2027. Together, the two moves amount to a sweeping infrastructure buildout, one that positions Claude to run across at least three major providers: CoreWeave, Google Cloud, and Amazon Web Services.
Why CoreWeave, and why now
CoreWeave has emerged as the go-to alternative to hyperscale clouds for AI companies that want dedicated Nvidia GPU clusters. The company operates data centers across the United States and Europe stocked with Nvidia’s latest hardware, and it has already locked in a reported $11.9 billion contract with OpenAI, Anthropic’s chief rival. Adding Anthropic to its client roster cements CoreWeave’s position at the center of the AI infrastructure race.
For Anthropic, the logic is partly about diversification. The company has relied heavily on AWS, backed by up to $8 billion in committed investment from Amazon. But depending on a single cloud provider creates risk: pricing leverage shifts to the vendor, and any capacity crunch or technical limitation on one platform can bottleneck an entire product line. By contracting directly with CoreWeave for Nvidia GPU access, Anthropic gains a second major source of the chips that remain the industry standard for training large language models.
Neither Anthropic nor CoreWeave has released a press statement confirming the deal’s dollar value, total GPU count, or specific data center locations. Bloomberg’s reporting carries significant weight, but the precise financial and operational terms remain unconfirmed by either company as of May 2026.
The Google Cloud expansion adds a different kind of chip
The CoreWeave agreement is only half of Anthropic’s infrastructure push. A joint announcement distributed through PRNewswire confirmed that Anthropic is expanding its use of Google Cloud and Google’s custom TPU processors, which are designed in-house by Google and manufactured with Broadcom silicon. The scale is striking: “multiple gigawatts” of TPU capacity, with delivery beginning in 2027.
To put that in perspective, a single gigawatt can power a data center campus housing hundreds of thousands of advanced AI chips. The commitment dwarfs most existing deployments and signals that Anthropic expects Claude’s computational appetite to grow dramatically over the next several years, driven by larger models, more users, and new capabilities.
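The “hundreds of thousands of chips” claim can be sanity-checked with rough arithmetic. The per-chip power draw, overhead factor, and PUE below are illustrative assumptions for this sketch, not figures reported by Anthropic, Google, or CoreWeave:

```python
# Back-of-envelope: how many AI accelerators can 1 GW of facility power support?
# All inputs are assumed, order-of-magnitude values, not reported figures.

GIGAWATT_W = 1_000_000_000

chip_draw_w = 700        # assumed draw for a modern H100-class accelerator, watts
overhead_factor = 1.5    # assumed host CPUs, memory, and networking per chip
pue = 1.3                # assumed power usage effectiveness (cooling, power losses)

facility_w_per_chip = chip_draw_w * overhead_factor * pue  # ~1,365 W per chip
chips = GIGAWATT_W / facility_w_per_chip

print(f"~{chips:,.0f} chips per gigawatt")  # on the order of hundreds of thousands
```

Under these assumptions one gigawatt supports roughly 700,000 accelerators, which is consistent with the article’s “hundreds of thousands” framing; more conservative overhead assumptions would still land in the same order of magnitude.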
TPUs and Nvidia GPUs serve overlapping but distinct roles. GPUs are the default for training frontier models and handle a wide range of AI workloads. TPUs, purpose-built by Google, can offer cost and efficiency advantages for specific training and inference tasks. Running both gives Anthropic the flexibility to route different workloads to whichever hardware delivers the best performance per dollar.
What this means for Claude users
For the millions of people and businesses that use Claude for coding, research, writing, and customer service, the practical implications are straightforward. More compute generally means faster responses, the ability to serve more simultaneous users without degradation, and the capacity to train larger, more capable model generations. If both the CoreWeave and Google Cloud commitments proceed as planned, Anthropic will have one of the most diversified infrastructure footprints of any AI company outside of Google and Microsoft.
That said, infrastructure is expensive, and Anthropic is not yet a profitable company. The startup has raised over $15 billion in funding through early 2025, including major rounds from Google and Amazon. Locking in multiyear compute contracts at this scale is a bet that revenue from Claude subscriptions, API access, and enterprise deals will grow fast enough to justify the spending. If demand plateaus or compute costs spike, those commitments could become a financial burden. If demand keeps climbing, they could look like bargains.
The bigger picture: AI’s industrial phase
Anthropic’s twin deals reflect a broader shift in how AI companies operate. Building a frontier model is no longer just an engineering challenge; it is an industrial logistics problem. Training runs consume megawatts of electricity for weeks or months. Inference at scale requires chips, cooling, and power that must be reserved years in advance. The companies that secure reliable access to that infrastructure will have a structural advantage over those that do not.
This is why the competition for compute has intensified so sharply. OpenAI has its massive CoreWeave contract and deep ties to Microsoft Azure. Google trains its Gemini models on its own TPU infrastructure. Meta operates one of the largest private GPU fleets in the world. Anthropic, which lacks the in-house hardware manufacturing of a Google or the captive cloud relationship of an OpenAI-Microsoft pairing, is assembling its compute portfolio deal by deal, spreading risk across Nvidia GPUs at CoreWeave, TPUs at Google Cloud, and existing capacity on AWS.
The strategy carries a hedge against supply chain concentration, too. Nvidia dominates the AI chip market, and any disruption to its manufacturing pipeline or advanced packaging supply could ripple across the industry. By securing long-term access to both Nvidia-based systems and Google’s custom TPUs, Anthropic insulates Claude from at least some of that single-vendor risk.
Key gaps that remain
Several important questions are still unanswered. How does the CoreWeave deal interact with Anthropic’s AWS relationship? Is the new GPU capacity earmarked for training, inference, or both? What are the financial terms, and how do they compare to CoreWeave’s reported OpenAI contract? On the Google Cloud side, the 2027 timeline for TPU delivery is a target, not a guarantee; data center construction across the industry has faced permitting delays, power grid constraints, and supply chain bottlenecks that have pushed back timelines repeatedly.
Until Anthropic or its partners release more detailed disclosures, the full shape of the company’s infrastructure strategy will remain partially obscured. What is clear, even from the available evidence, is that Anthropic is planning for a future where Claude’s success depends as much on securing chips and electricity as it does on breakthroughs in model architecture. The race to build the best AI is increasingly a race to power it.
This article was researched with the help of AI, with human editors creating the final content.