OpenAI is reorienting its business around paying enterprise clients, a strategic shift that its chief financial officer has described publicly and that reflects growing pressure from Anthropic, the rival AI company that has been quietly building a corporate customer base of its own.
In an interview reported by the Associated Press in early 2025, OpenAI CFO Sarah Friar discussed the company’s direction. The AP reported that Friar described OpenAI as moving toward business-oriented products as a path to profitability and acknowledged that the majority of ChatGPT’s users do not pay for the service. No direct quotes from the interview have been independently verified for this article, so readers should consult the AP’s original reporting for Friar’s precise language. The underlying point, however, is clear from the AP’s account: serving hundreds of millions of free users requires enormous compute infrastructure, and only a small fraction convert to paid plans, making the enterprise pivot a financial imperative rather than a preference.
Why the enterprise push matters now
OpenAI’s consumer brand is enormous. ChatGPT reached 100 million users faster than almost any consumer product before it. But consumer subscriptions do not scale the same way that six- and seven-figure enterprise contracts do, especially when the cost of running large language models remains steep.
The AP reported that Friar signaled OpenAI plans to introduce a new model designed for what the company calls “high-value professional work.” No release date, pricing structure, or detailed capability list has been disclosed. The framing suggests a product aimed at complex analysis, large-scale code generation, and domain-specific tasks that corporate buyers will pay premium rates for, rather than the general-purpose chatbot experience that made OpenAI a household name.
Anthropic’s enterprise playbook
While OpenAI announces its intentions, Anthropic has been publishing data on its own adoption. The company’s Economic Index, released in November 2025 as arXiv preprint 2511.15080, maps how its Claude model is used across enterprise workflows, including data analysis, document drafting, and process automation. arXiv is a preprint repository, not a peer-reviewed journal, so the paper has not undergone formal academic review. That said, the report is structured with transparent methodology and explicit limitations, which lends it more weight than a typical vendor white paper.
Anthropic has also secured substantial financial backing for its enterprise ambitions. Amazon has invested roughly $8 billion in the company, and Claude is available through Amazon Web Services, giving it a direct channel into the corporate IT budgets that both companies are chasing.
The Economic Index documents uneven geographic adoption but confirms that Claude has gained ground in business settings where reliability, safety credentials, and integration with existing systems matter more than consumer brand recognition. Anthropic has cultivated a reputation as the safety-focused alternative in generative AI, a positioning that appeals to risk-conscious enterprise buyers in regulated industries like finance, healthcare, and legal services.
The competitive landscape is wider than two companies
Framing this as a two-horse race understates the complexity of the enterprise AI market. Google’s Gemini models are integrated into Google Cloud and Workspace, giving them a built-in distribution advantage with the millions of businesses already paying for Google services. Microsoft, OpenAI’s largest investor and partner, sells AI capabilities through its Copilot products and Azure OpenAI Service, meaning that some of OpenAI’s enterprise revenue flows through Microsoft’s sales channels rather than directly. And Meta’s openly licensed Llama models have attracted enterprise users who want to run AI on their own infrastructure without vendor lock-in.
That crowded field makes OpenAI’s pivot more urgent. The company cannot rely on ChatGPT’s consumer fame to win corporate contracts when buyers have multiple credible alternatives, each with distinct pricing, deployment, and data-handling models. Enterprise procurement decisions tend to hinge on integration, compliance, and total cost of ownership rather than on which chatbot went viral first.
What remains uncertain
OpenAI has not released specific enterprise customer counts, retention rates, or a breakdown of revenue by segment. The AP’s account of Friar’s statements signals strategic direction, but the gap between announcing a pivot and executing one can be wide. Without hard numbers, it is difficult to assess whether enterprise contracts are already offsetting the cost of the free tier or whether the company is still subsidizing ChatGPT’s massive user base with investor capital.
Anthropic’s Economic Index, while methodologically structured, draws from its own customer data. It captures Claude’s adoption among existing users, not a market-wide comparison. No independent third-party audit of Anthropic’s adoption figures has been published, and the report does not quantify how Claude’s enterprise usage compares in absolute terms to OpenAI, Google, or any other provider.
The AP reported that Friar’s comments suggest OpenAI views its free consumer tier as unsustainable at current scale, but the company has not announced plans to eliminate free access to ChatGPT. The distinction matters for the hundreds of millions of people who rely on the tool without paying. Any material change, whether through stricter usage limits, more aggressive upselling, or outright paywalls, would reshape who benefits from generative AI and how often they can use it.
What enterprise buyers should watch for through mid-2026
For businesses deciding where to allocate AI budgets between now and mid-2026, the signals are clear even if the details are not. OpenAI is telling the market that its best tools will increasingly be designed and priced for professional use. Anthropic is building a public record of enterprise adoption that prospective customers can scrutinize. Both companies want the same contracts.
Companies evaluating AI procurement should request detailed usage, performance, and security data from both providers before committing. Published preprints, even well-structured ones, deserve the same scrutiny as any vendor’s self-reported metrics. Questions about data handling, uptime guarantees, and long-term pricing stability may matter more than marginal differences in benchmark scores.
The broader pattern is an AI industry moving past its initial consumer growth phase and into a period where recurring enterprise revenue, not user counts, determines which companies survive. OpenAI’s pivot is an explicit acknowledgment that viral popularity does not, by itself, produce a sustainable business. Anthropic’s November 2025 research paper is a bid to prove that its own strategy, embedding Claude into concrete corporate workflows, already generates the kind of durable usage that justifies billions in investment. As both companies sharpen their enterprise offerings through mid-2026, the trade-offs between openness, affordability, and profitability will only grow starker.
*This article was researched with the help of AI, with human editors creating the final content.