Morning Overview

Anthropic has now committed over $200 billion to cloud infrastructure and chips — largely through a massive Google Cloud pact

Anthropic, the AI company behind the Claude chatbot, has stacked up cloud and chip commitments that now exceed $200 billion, with the vast majority flowing to Google Cloud under a deal that ranks among the largest infrastructure contracts in tech history. The sheer concentration of that spending on a single provider has put Anthropic at the center of a federal investigation into whether Big Tech’s grip on AI computing power is choking off competition before the industry fully takes shape.

The commitment is staggering relative to Anthropic’s own size. The company was valued at roughly $61.5 billion after a funding round in early 2025. Pledging more than three times that figure in future cloud spending underscores how capital-intensive frontier AI development has become, and how dependent even the best-funded startups are on the infrastructure controlled by a handful of tech giants.

The Google Cloud deal at the core

The centerpiece is a multi-year Google Cloud agreement reportedly worth more than $100 billion, a figure first detailed by The Information in early 2025. Under the arrangement, Anthropic commits to purchasing vast quantities of computing capacity, including access to Google’s custom-designed Tensor Processing Units (TPUs), the specialized chips that power much of Claude’s training and inference workloads. Layered on top are separate procurement agreements for additional chips and infrastructure services that push the combined total past the $200 billion threshold, according to industry reporting.

These are not lump-sum payments. Cloud deals of this magnitude typically spread minimum spending guarantees across five to ten years, meaning Anthropic’s annual cash outflow is a fraction of the headline number. Still, the commitments are binding enough to shape how Anthropic allocates resources for the foreseeable future, and large enough to influence how Google Cloud prices and rations capacity for every other customer on its platform.
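The gap between the headline figure and yearly cash outflow is simple to see with a back-of-the-envelope calculation. The sketch below is illustrative only: the real contract terms are confidential, and the even spread and five-to-ten-year term are assumptions, not reported facts.

```python
# Illustrative arithmetic only -- actual terms of the Anthropic deals
# are confidential. Assumes the commitment is spread evenly over a
# hypothetical multi-year term, as is typical for large cloud contracts.

def annual_outflow(total_commitment: float, term_years: int) -> float:
    """Average yearly spend if a commitment is spread evenly over the term."""
    return total_commitment / term_years

headline = 200e9  # combined commitments reported in press coverage
for years in (5, 10):
    yearly = annual_outflow(headline, years)
    print(f"{years}-year term: ${yearly / 1e9:.0f}B per year")
```

Under these assumptions, a $200 billion commitment works out to $20 billion to $40 billion per year: enormous, but an order of magnitude smaller than the headline suggests.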

Anthropic’s relationship with Google is also not purely transactional. Alphabet, Google’s parent company, has invested more than $2 billion directly into Anthropic, creating a hybrid arrangement in which the cloud provider is simultaneously a major investor, a supplier of critical hardware, and a distribution partner. Amazon holds a parallel position: it has committed up to $8 billion in investment capital to Anthropic and hosts Claude models on Amazon Web Services. That dual dependency on two of the three dominant cloud platforms, AWS and Google Cloud alongside Microsoft Azure, gives Anthropic unusual reach but also unusual exposure to the competitive dynamics regulators are now scrutinizing.

The FTC investigation

The Federal Trade Commission has taken a direct interest in these arrangements. In January 2024, the agency used its Section 6(b) authority to launch a formal inquiry into generative AI investments and partnerships, issuing compulsory orders to Microsoft, Amazon, Alphabet, Anthropic, and OpenAI. The orders required each company to turn over internal documents detailing the strategic rationale, financial terms, and governance structures behind their AI cloud deals. The Google-Anthropic, Amazon-Anthropic, and Microsoft-OpenAI relationships were singled out as primary targets.

Section 6(b) orders are not requests. They carry legal force, and the information companies provide under them can feed directly into enforcement actions or new regulations. The FTC has deployed the same tool in past investigations of pharmacy benefit managers and hospital consolidation, sectors where a small number of players controlled bottleneck resources. By reaching for it here, the agency signaled that it views AI infrastructure concentration as a systemic competition issue, not a routine commercial matter.

In January 2025, the FTC published a staff report synthesizing what it learned. The analysis described common patterns across the deals it examined: multi-year spending floors, preferential access to specialized chips, and contractual terms that make it expensive for AI developers to shift workloads to competing providers. The report stopped short of recommending specific enforcement actions, but it laid out a framework for understanding how these partnerships could entrench dominant cloud providers, limit rivals’ access to essential computing resources, and give preferred partners advantages that smaller firms cannot realistically match.

Why the scale matters beyond Anthropic

Training a frontier AI model requires thousands of specialized processors running continuously for weeks or months. Only a small number of providers can deliver that capacity reliably. AWS, Google Cloud, and Microsoft Azure remain the dominant three, though Oracle Cloud Infrastructure and newer entrants like CoreWeave have begun competing for large AI workloads. When a single AI company locks in a substantial share of one provider’s current and future chip supply, the pool of available compute for everyone else shrinks.

That dynamic is what makes Anthropic’s commitments a bellwether. If one startup can absorb $200 billion worth of capacity from a leading cloud provider, the practical question for every other AI developer is whether enough supply remains at competitive prices. Open-source research groups, university labs, and smaller startups already struggle to secure GPU and TPU time during periods of peak demand. A deal of this scale could widen that gap, or it could prompt cloud providers to expand capacity aggressively enough that supply keeps pace. The outcome depends on details that remain confidential: how much of Google Cloud’s total TPU roadmap is earmarked for Anthropic, whether other customers face allocation limits as a result, and how pricing adjusts as demand grows.

Neither Anthropic nor Google has publicly addressed those questions in detail. Company communications have stayed at the level of high-level talking points about the partnership’s strategic value, without offering specifics on interoperability commitments, exclusivity terms, or steps to ensure that smaller developers retain meaningful access to the same hardware.

What regulators and the industry are watching for next

As of mid-2026, the FTC has not filed formal antitrust complaints against any of the companies named in its inquiry. But the agency’s posture suggests it is building a foundation for potential action rather than closing the book. Observers tracking the investigation are watching for several signals: additional compulsory information requests, public statements from commissioners foreshadowing remedies, or formal complaints targeting specific contractual provisions in the cloud deals.

Any conditions eventually imposed on the Google-Anthropic or Amazon-Anthropic relationships could ripple across the entire cloud market. If regulators require shorter contract terms, prohibit exclusivity in chip allocation, or mandate interoperability standards, those rules would likely reshape how every cloud provider negotiates with AI customers. The stakes extend well beyond one company’s balance sheet. What the FTC decides about these partnerships will help determine whether the infrastructure layer of AI remains concentrated among a few incumbents or opens up enough for a broader set of competitors to build on.

For now, the clearest takeaway is that the scale of Anthropic’s commitments has outpaced the public’s ability to evaluate them. The FTC has seen the contracts. The rest of us are working from fragments. That gap between what regulators know and what the market can see is itself a competitive issue, and closing it may matter as much as any enforcement action that follows.

*This article was researched with the help of AI, with human editors creating the final content.*