
The leaders of AMD and IBM are looking at the same artificial intelligence boom and seeing something very different from a speculative bubble. They see a once-in-a-generation infrastructure buildout that could push global data center exposure toward 8 trillion dollars, but only if the economics of chips, power, and cloud services evolve fast enough to justify the bill. Their comments frame a pivotal question for investors and policymakers: is this an overhyped frenzy, or the early stage of a durable industrial platform whose cost structure is still badly out of balance?

From my vantage point, the answer is not a simple yes or no on “bubble” talk, but a tension between demand that looks real and infrastructure spending that looks fragile. AMD chief executive Lisa Su is leaning hard into the upside, arguing that the market is still underestimating how deeply Gen AI will penetrate business and consumer life, while IBM chief executive Arvind Krishna is warning that the current trajectory of AI data center investment cannot pay for itself at today’s prices. Taken together, their views sketch a future where AI is inevitable, but 8 trillion dollars of concrete, silicon, and power lines will only make sense if the industry rewrites its own cost curve.

Two CEOs, one AI boom, very different alarms

When I line up the public comments from AMD and IBM, what jumps out is not disagreement over whether AI will matter, but over how sustainable the current spending spree really is. Su has been explicit that she does not see an AI bubble, arguing that talk of overvaluation is “somewhat overstated” and that long-term demand for accelerated computing will justify today’s aggressive buildouts. Her stance reflects a chipmaker’s vantage point, where every new Gen AI workload, from ChatGPT-style assistants to enterprise copilots inside Microsoft 365 and Salesforce, translates directly into orders for GPUs and custom accelerators.

Arvind Krishna, the IBM CEO, is looking at the same wave of orders and sounding a very different alarm. In his view, there is “no way” that spending trillions of dollars on AI data centers will pay off if infrastructure costs stay where they are, a point he has underlined in multiple interviews and public appearances. Krishna is not dismissing AI itself, but he is questioning whether the current economics of cloud compute, networking, and storage can support an 8 trillion dollar exposure without crushing returns for investors and customers. That tension between Su’s optimism and Krishna’s caution is the backdrop for the entire AI buildout debate.

Lisa Su’s emphatic rejection of an AI bubble

Lisa Su has been unusually blunt in pushing back on the idea that AI is in bubble territory, and I read that as a signal of how confident AMD is in the durability of demand. At a recent event, Su “emphatically” rejected talk of an AI bubble, describing claims of overhype as “somewhat overstated” and pointing to a pipeline of real-world deployments that she expects to accelerate into the second half of next year. Her argument is that enterprises are only at the beginning of integrating Gen AI into core workflows, from automated code generation to customer service agents, and that the hardware cycle will track that software adoption curve for years rather than quarters.

That stance matters because AMD is not just selling chips into a speculative, crypto-style frenzy; it is competing head-to-head with Nvidia for sockets in the hyperscale data centers that power products like Google Search, Meta’s recommendation engines, and Amazon’s Alexa. When Su dismisses bubble fears, she is effectively saying that these customers are not building capacity for its own sake, but to support revenue-generating services that already have billions of users. Her confidence is reinforced by AMD’s own roadmap, which is tightly coupled to Gen AI workloads and is being positioned as a key enabler for businesses that want to deploy large language models more efficiently.

Arvind Krishna’s 8 trillion dollar warning

Arvind Krishna is not arguing that AI is overhyped, but he is putting a very specific number on the risk side of the ledger. The IBM CEO has warned that as AI data center projects stack up around the world, total exposure could rise toward 8 trillion dollars, a figure that captures not just servers and GPUs but also power infrastructure, cooling, and the real estate that houses these facilities. In his view, there is “no way” that such a massive capital outlay will generate acceptable returns if the industry keeps paying current prices for AI chips, electricity, and high end networking gear.

Krishna has framed this as a basic math problem rather than a philosophical debate about technology. If companies borrow heavily or commit shareholder capital to build AI capacity, the revenue from Gen AI services has to cover not only operating costs but also the interest and depreciation on those assets. On the “Decoder” podcast, he stressed that at today’s infrastructure costs, the interest burden alone could make it impossible for many operators to earn back their investment, a point that has been echoed in detailed coverage of his comments on trillions in AI data centers.
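To make that math concrete, here is a minimal back-of-envelope sketch in Python. Every input is an illustrative assumption of mine, not a figure reported by IBM or anyone else; the point is only to show how quickly interest, depreciation, and operating costs stack up against a buildout.

```python
# Back-of-envelope model of Krishna's "basic math problem": revenue must
# cover interest, depreciation, and operating costs before any profit.
# All inputs below are illustrative assumptions, not reported figures.

capex = 100e9             # assumed buildout: $100B of AI data centers
interest_rate = 0.06      # assumed cost of debt on that capital
chip_share = 0.60         # assumed share of capex spent on accelerators
chip_life_years = 5       # short write-down window for AI chips
facility_life_years = 15  # longer life for buildings, power, and cooling
opex_ratio = 0.10         # assumed annual opex (power, staff) vs. capex

annual_interest = capex * interest_rate
annual_depreciation = (capex * chip_share / chip_life_years
                       + capex * (1 - chip_share) / facility_life_years)
annual_opex = capex * opex_ratio

breakeven_revenue = annual_interest + annual_depreciation + annual_opex
print(f"Interest:           ${annual_interest / 1e9:.1f}B per year")
print(f"Depreciation:       ${annual_depreciation / 1e9:.1f}B per year")
print(f"Operating costs:    ${annual_opex / 1e9:.1f}B per year")
print(f"Break-even revenue: ${breakeven_revenue / 1e9:.1f}B per year "
      f"({breakeven_revenue / capex:.0%} of capex, every year)")
```

Under these assumed inputs, the operator needs roughly 30 cents of revenue per dollar of capex every single year before earning anything at all, which is the arithmetic behind Krishna’s skepticism.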

Why IBM says the current AI buildout is unsustainable

From Krishna’s perspective, the problem is not that AI will fail, but that the current buildout is front loaded with costs that assume near perfect execution on the revenue side. He has argued that there is “no way” the present trajectory of AI data center expansion can be sustained, pointing to a mismatch between the pace of capital deployment and the slower, more incremental way that enterprises typically adopt new software platforms. In other words, the industry is pouring money into capacity that may sit underutilized for years while customers experiment with pilots and limited rollouts.

Reporting on his comments has highlighted how IBM’s own exposure to AI infrastructure shapes this view. As IBM’s data center footprint and cloud services commitments grow, Krishna has become more vocal about the risk that global AI infrastructure exposure could move toward 8 trillion dollars, a level he sees as dangerous without a sharp drop in unit costs. Coverage of his remarks on unsustainable AI data center trends underscores that this is not a theoretical worry but a live issue for a company that sells both AI software and the infrastructure that runs it.

The brutal economics of AI chips and depreciation

One of the most underappreciated pressures in this story is how quickly AI hardware becomes obsolete. Traditional servers might be depreciated over seven to ten years, but high-end AI accelerators are now being written down over five years or less, and in practice many operators feel compelled to refresh even sooner to stay competitive. That means a data center filled with cutting-edge GPUs is not just expensive to build but expensive to keep on the books, because the useful life of those chips is shrinking as new generations arrive faster.
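A simple straight-line comparison shows why the shorter write-down window stings. The fleet value below is an assumed example, not any company’s actual books:

```python
# Straight-line depreciation: annual write-down = cost / useful life.
# The $10B fleet value is an assumed example for illustration only.

fleet_cost = 10e9  # assumed fleet of AI accelerators

for useful_life_years in (10, 7, 5, 3):
    annual_writedown = fleet_cost / useful_life_years
    print(f"{useful_life_years:>2}-year life: "
          f"${annual_writedown / 1e9:.2f}B hits the income statement each year")
```

Moving the same fleet from a ten-year schedule to a five-year one doubles the annual charge, and a three-year refresh cycle more than triples it.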

Analysts have been explicit about this dynamic, noting that the useful life of AI chips is now often five years or less, which forces companies to write down assets faster and replace them sooner. UBS semiconductor analyst Tim Arcuri has pointed out that this accelerates the financial pressure on operators, since they must recover their investment in a shorter window or risk holding outdated hardware that cannot run the latest models efficiently. Detailed coverage of the great AI buildout has emphasized that this rapid depreciation is a key reason the spending shows no sign of slowing, even as it raises questions about long-term returns.

IBM’s 8 trillion dollar question: can AI pay for itself?

Krishna’s 8 trillion dollar figure is not just a headline-grabbing number; it is a way of forcing the industry to confront the full stack of costs that come with AI at scale. When I unpack his argument, it comes down to a simple question: can the revenue from Gen AI services, from enterprise copilots to consumer chatbots, realistically cover the combined burden of capital expenditure, rapid chip depreciation, and rising energy prices? If the answer is no, then at least part of the current buildout will end up as stranded or underperforming assets.

IBM’s own analysis has highlighted how the rapid depreciation of AI chips adds another layer of financial pressure on top of already high capital costs. When a company has to write down a multi-billion dollar fleet of accelerators in five years or less, the annual hit to the income statement is substantial, and that is before factoring in the cost of power, cooling, and specialized staff. Coverage of IBM’s internal thinking has framed this as a core reason the IBM CEO questions an 8 trillion dollar AI data center spend, stressing that the combination of short hardware lifecycles and massive upfront investment makes the economics far more fragile than many investors assume.

Why Lisa Su still sees durable demand, not froth

Against that backdrop of cost anxiety, Lisa Su’s confidence stands out as a bet that demand will more than keep up with the hardware cycle. From her seat at AMD, the key fact is that Gen AI is not a single product but a horizontal capability that is being woven into everything from search and advertising to industrial automation and drug discovery. When she says bubble fears are overstated, she is effectively arguing that the addressable market is still expanding faster than the cost base, especially as new chip designs improve performance per watt and per dollar.

Su’s view is also shaped by the way hyperscalers and large enterprises are planning their own AI roadmaps. Many of AMD’s customers are not just buying GPUs for one-off experiments; they are building multi-year platforms that will support internal tools like code assistants, document summarization, and predictive maintenance across fleets of equipment. That kind of embedded usage tends to be sticky, which is why Su can “emphatically” reject the idea that AI demand will suddenly collapse, a stance that has been widely reported in coverage of her AI bubble comments.

Gen AI partnerships as a bridge between optimism and caution

One way the industry is trying to reconcile Su’s optimism with Krishna’s caution is through strategic partnerships that spread risk and accelerate adoption. The collaboration between IBM and AMD around Gen AI is a case in point, pairing IBM’s software and services footprint with AMD’s hardware roadmap to create more efficient and scalable deployment options for customers. By working together, the two companies can offer integrated stacks that promise better performance per dollar, which is exactly the kind of improvement Krishna says is necessary to make multi trillion dollar infrastructure spending pay off.

As businesses continue to explore the potential of Gen AI, alliances like the IBM and AMD partnership are being positioned as a way to lower barriers to entry and make AI deployments more cost effective. IBM can bring its consulting and cloud expertise to help enterprises identify high-value use cases, while AMD supplies accelerators tuned for those workloads, creating a virtuous cycle where better utilization and higher productivity help justify the underlying infrastructure. Detailed analysis of the partnership’s Gen AI deployment benefits has framed the collaboration as a way for IBM to strengthen its role as an attractive provider of AI solutions while also addressing some of the cost concerns Krishna has raised.

What has to change for 8 trillion dollars to make sense

When I put all of these threads together, the path to making an 8 trillion dollar AI infrastructure footprint rational runs through three levers: cheaper and more efficient hardware, better utilization of existing capacity, and business models that can capture more of the value Gen AI creates. Krishna’s warning is essentially that without progress on all three, the industry is building a cost base that its current revenue streams cannot support. Su’s counterpoint is that the demand side is evolving so quickly that those improvements are not just possible but likely, especially as competition among chipmakers and cloud providers intensifies.
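Those three levers interact multiplicatively, which is why progress on all of them matters at once. A hedged sketch, with every number an assumption chosen only to show the shape of the sensitivity rather than any company’s actual economics:

```python
# How the three levers change the gross value that AI workloads must
# generate per dollar of capex. All baseline numbers are assumptions.

def required_value_per_capex(annual_cost_ratio, utilization, value_capture):
    """Gross value per dollar of capex needed each year to break even.

    annual_cost_ratio: interest + depreciation + opex as a share of capex
    utilization: fraction of built capacity actually doing paid work
    value_capture: share of value created that the operator can bill for
    """
    return annual_cost_ratio / (utilization * value_capture)

baseline = required_value_per_capex(0.30, utilization=0.40, value_capture=0.50)
improved = required_value_per_capex(0.24, utilization=0.70, value_capture=0.60)
print(f"Baseline assumptions:  {baseline:.2f}x capex in gross value per year")
print(f"All three levers move: {improved:.2f}x capex in gross value per year")
```

Under these assumed numbers, cheaper hardware, higher utilization, and better value capture together cut the gross value the workloads must generate from about 1.5 times capex per year to roughly 0.6 times, which is the difference between an impossible hurdle and a plausible one.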

IBM’s own messaging reflects this duality. On one hand, the IBM CEO has repeatedly said there is “no way” that trillions in AI data center spending will pay off at today’s infrastructure costs, a view that has been reinforced in coverage of his comments on AI data center costs. On the other hand, IBM is investing heavily in Gen AI platforms and partnerships that aim to improve those economics, betting that if it can help customers turn AI pilots into scaled deployments, the revenue side of the equation will start to look more like the cloud and less like a speculative land grab.

The AI buildout is real, but the bill is coming due

For now, the great AI buildout shows no sign of slowing, and both AMD and IBM are deeply embedded in that momentum. Hyperscalers are racing to add capacity, enterprises are experimenting with copilots and domain specific models, and chipmakers are rolling out new architectures at a pace that would have seemed impossible a decade ago. From a distance, it can look like a classic bubble, with capital chasing the latest buzzword, but the underlying demand for automation, personalization, and insight is grounded in real business needs that are unlikely to vanish.

The real test will come over the next several years as depreciation schedules, interest payments, and power bills collide with the revenue curves of Gen AI products. If Lisa Su is right, the market will absorb the new capacity, AI will become as ubiquitous as cloud storage, and the 8 trillion dollar figure will look less like a warning and more like the foundation of a new computing era. If Arvind Krishna’s caution proves prescient, the industry will be forced into a painful reset, writing down assets and rethinking how much infrastructure it truly needs. Either way, the decisions being made today in boardrooms at AMD, IBM, and their biggest customers will shape not just the future of AI, but the financial architecture of the digital economy itself.
