Morning Overview

Anthropic’s revenue just passed OpenAI’s for the first time — $30 billion run rate while spending 4x less on training

Anthropic, the maker of Claude, has reached a $30 billion annualized revenue run rate, a figure that, based on the most recent publicly reported numbers from its chief rival, appears to place it ahead of OpenAI in top-line revenue. The company disclosed the milestone in connection with a deal, confirmed by Bloomberg in April 2026, under which Broadcom will ship Google-designed TPU chips directly to Anthropic. The arrangement gives the company a training and inference pipeline built on custom silicon rather than the Nvidia GPUs that dominate the rest of the industry, and it goes a long way toward explaining how Anthropic is scaling so aggressively while reportedly spending a fraction of what OpenAI allocates to compute.

The numbers behind the milestone

The $30 billion run rate is the most firmly sourced number in the story. It surfaced alongside the Broadcom-TPU agreement and has not been disputed by Anthropic or contradicted by subsequent reporting. For context, OpenAI was widely reported to be generating roughly $12.7 billion in annualized revenue as of late 2025, according to The Information and the Financial Times. OpenAI has not published an updated figure for mid-2026, so the exact gap is difficult to pin down. But even accounting for OpenAI’s rapid growth, Anthropic’s disclosed number represents a striking acceleration that few observers predicted a year ago.
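The arithmetic behind a run-rate comparison is worth making explicit, since a run rate is a snapshot rather than a full-year total. The sketch below is illustrative only, using the two figures cited in the reporting above; the function name and structure are ours, not anything either company publishes.

```python
# Illustrative arithmetic only, using the figures cited above. A revenue
# "run rate" is conventionally the latest month's revenue times 12 (or the
# latest quarter times 4), so it annualizes a snapshot in time.

def annualized_run_rate(monthly_revenue_usd: float) -> float:
    """Annualize a single month of revenue."""
    return monthly_revenue_usd * 12

# Anthropic's disclosed $30B run rate implies roughly $2.5B per month.
implied_monthly = 30e9 / 12
assert annualized_run_rate(implied_monthly) == 30e9

# OpenAI's last widely reported figure (late 2025) was ~$12.7B annualized.
# The nominal gap, before accounting for OpenAI's growth since then:
gap = 30e9 - 12.7e9

print(f"Implied monthly revenue: ${implied_monthly / 1e9:.2f}B")
print(f"Nominal gap vs. late-2025 OpenAI figure: ${gap / 1e9:.1f}B")
```

The caveat in the text applies to the second number especially: the gap is computed against a figure that is at least six months stale.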

Separately, Anthropic closed a $30 billion Series G funding round at a reported $380 billion valuation, according to Reuters. That makes it one of the most valuable private technology companies on the planet. The same Reuters report confirmed that Goldman Sachs had partnered with Anthropic to deploy AI agents for banking tasks, including compliance workflows and trade reconciliation. Landing a Wall Street institution of that size signals that Claude-based products have cleared the security and reliability bar in one of the most heavily regulated industries in the world.

Why the TPU bet changes the math

Training a frontier AI model is breathtakingly expensive, and the single biggest variable is the cost of compute. Most leading labs rely on Nvidia’s H100 and B200 GPUs, chips that have been in such high demand that pricing has taken on an auction-like quality. Anthropic chose a different path. By building its infrastructure around Google’s Tensor Processing Units and securing a direct procurement channel through Broadcom, the company sidesteps the GPU scarcity that has inflated costs across the industry.

Google’s TPUs are purpose-built for machine-learning workloads. Because they are manufactured and allocated within Google’s own supply chain, they are not subject to the same bidding wars that drive Nvidia GPU prices higher. The practical result: Anthropic can run large-scale training jobs at a lower per-unit cost. That means more experimental runs, faster iteration on model quality, and quicker deployment of new versions to paying customers, all without proportionally increasing capital expenditure.

The headline claim that Anthropic spends “4x less on training” than OpenAI is worth treating carefully. Neither company publishes detailed training-cost breakdowns, and no audited comparison of their compute budgets exists publicly. The figure likely originates from analyst estimates or investor briefings rather than verified financial documents. What is clear from the Broadcom deal is that Anthropic has locked in a structurally cheaper hardware pipeline. Whether the precise ratio is 4x, 3x, or something else, the directional advantage is real.
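To make that hedging concrete, here is a hypothetical sketch of how a per-chip-hour cost advantage would map to a training-budget ratio. Every input below is an invented placeholder chosen for illustration; neither company discloses per-chip-hour costs or chip-hour counts, so only the structure of the comparison is meaningful, not the figures.

```python
# Hypothetical sketch: ALL numbers here are invented placeholders, not
# reported figures from Anthropic, OpenAI, Nvidia, or Google.

def training_budget(chip_hours: float, cost_per_chip_hour_usd: float) -> float:
    """Total compute spend for one training run."""
    return chip_hours * cost_per_chip_hour_usd

CHIP_HOURS = 50_000_000  # placeholder chip-hours for one frontier-scale run

# Placeholder per-chip-hour prices; only their ratio matters here.
gpu_run = training_budget(CHIP_HOURS, cost_per_chip_hour_usd=4.00)
tpu_run = training_budget(CHIP_HOURS, cost_per_chip_hour_usd=1.00)

ratio = gpu_run / tpu_run
print(f"Spend ratio at these placeholder prices: {ratio:.0f}x")

# The flip side of the same arithmetic: a fixed budget buys `ratio` times as
# many experimental runs on the cheaper pipeline, which is the iteration
# advantage described in the text above.
```

The point of the sketch is that the headline ratio is a function of two undisclosed inputs, which is why the text treats "4x" as directional rather than verified.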

For enterprise clients, this cost structure matters directly. A bank deploying Claude-based agents across thousands of workflows needs confidence that the AI provider can sustain service without dramatic price hikes tied to GPU scarcity. Anthropic’s TPU arrangement offers that kind of supply-chain predictability, and it makes multi-year enterprise contracts easier to negotiate and price.

What Anthropic is actually selling

Revenue at this scale does not come from chatbot subscriptions alone. Anthropic’s growth has been driven heavily by enterprise API access to Claude and, increasingly, by AI agent products designed for specific business functions. The Goldman Sachs partnership is the most visible example, but Anthropic has also deepened its relationship with Amazon Web Services, which has invested billions in the company and offers Claude models through its Bedrock platform. That dual-cloud positioning, with infrastructure on Google TPUs and distribution through AWS, gives Anthropic reach into two of the three largest cloud ecosystems simultaneously.

The enterprise focus also helps explain the run rate’s size. Large corporate contracts tend to involve committed spend, volume guarantees, and multi-year terms. If a meaningful share of Anthropic’s revenue comes from these kinds of agreements rather than pay-as-you-go API usage, the $30 billion figure may be more durable than a simple annualization of one strong quarter would suggest. That said, no source in the available reporting breaks down how much of the run rate is recurring versus consumption-based, so the question of durability remains open.

The Google question

Anthropic’s relationship with Google adds a layer of complexity. Google is both a major investor in Anthropic and the designer of the TPU chips Anthropic now receives through Broadcom. Whether the deal includes preferential pricing, volume commitments, or exclusivity terms has not been disclosed. If Google’s support effectively subsidizes Anthropic’s compute costs, the spending comparison with OpenAI reflects a strategic subsidy as much as pure operational efficiency. That distinction matters for anyone trying to assess whether Anthropic’s economics could be replicated by a competitor without a similar backer.

It also raises a question about independence. Anthropic has positioned itself as a safety-focused alternative to OpenAI, but its infrastructure now depends heavily on Google’s silicon roadmap. If Google were to reprioritize TPU allocation toward its own DeepMind models or adjust the terms of supply, Anthropic’s cost advantage could narrow. For now, the two companies’ interests appear aligned, but the arrangement is worth watching as the competitive landscape shifts.

Where the AI revenue race stands in mid-2026

Six months ago, the conventional wisdom held that OpenAI’s head start, its consumer brand recognition through ChatGPT, and its deep integration with Microsoft would keep it comfortably in the revenue lead among pure-play AI companies. Anthropic’s $30 billion run rate disrupts that narrative. It suggests that enterprise adoption, not consumer subscriptions, may be the faster path to scale in this market, and that hardware strategy can be as decisive as model quality in determining which company grows fastest.

None of this means the race is settled. OpenAI is pursuing its own enterprise push through ChatGPT Enterprise and Microsoft’s Copilot ecosystem. Google’s DeepMind division has its own frontier models and captive TPU access. And the broader competitive field, including Meta’s open-source Llama models and a wave of well-funded startups, continues to pressure pricing and push the pace of innovation. Anthropic’s lead, if it holds, will need to be defended quarter by quarter.

What the Broadcom deal and the $30 billion figure do confirm is that the economics of building a frontier AI company are not fixed. Anthropic has shown that a different hardware strategy can produce a different cost structure, and that a different cost structure can accelerate revenue growth in ways that catch even close observers off guard. Whether that advantage compounds or erodes will depend on decisions that have not yet been made, by Anthropic, by Google, and by every competitor watching the numbers and recalculating its own plans.


*This article was researched with the help of AI, with human editors creating the final content.*