
Quantum computing has long lived in the realm of lab demos and bold PowerPoint slides, but two of the industry’s biggest players now say the first truly useful machines are less than five years away. Google and IBM are both publicly targeting 2029 for quantum systems that can run practical, fault-tolerant workloads, a timeline that would move the field from experimental curiosity to a new layer of the computing stack. If they are right, the second half of this decade will be defined by a race to turn fragile qubits into industrial infrastructure.
Those promises are not just marketing slogans; they are backed by detailed roadmaps, new facilities, and specific performance targets that spell out what “practical” and “fault tolerant” actually mean. I see a clear pattern emerging: Google is betting on a massive scale-up of qubits and aggressive error correction, while IBM is anchoring its plan in a carefully staged architecture that culminates in a large-scale, fault-tolerant platform for clients. The result is a rare moment in tech when two incumbents are publicly staking their reputations on a concrete year.
Why 2029 became the magic year for quantum computing
When two separate companies converge on the same deadline, it usually signals more than coincidence. By setting 2029 as the year when quantum computers move from prototypes to practical tools, Google and IBM are effectively telling customers, investors, and regulators that the underlying physics and engineering are finally predictable enough to schedule. I read that as a declaration that the field has left its exploratory phase and entered an execution phase, where the main question is not “if” but “how fast” they can deliver.
Google has been explicit that it wants a “practical” quantum computer by the end of this decade, one powerful enough to tackle real-world problems and, in its own framing, to “vie with quantum computing startups” that are chasing the same goal at smaller scale. The company has tied that ambition to a new center dedicated to quantum hardware and software, and to a design that will ultimately require on the order of a million qubits to be truly transformative. IBM, for its part, has framed 2029 as the moment it will deliver the first fault-tolerant quantum computer available to clients and capable of running 100 million gates, a milestone it has embedded directly into its quantum roadmap.
Google’s 2029 bet: from lab demos to a “practical” machine
Google’s strategy starts from a simple premise: a useful quantum computer must be big, stable, and integrated into a broader computing ecosystem, not just a physics experiment in a cryostat. The company has said it is aiming to build a useful quantum computer by 2029, one that can handle commercially relevant workloads rather than isolated benchmarks. That ambition is tightly coupled to its broader AI agenda, since Google expects quantum hardware to accelerate certain machine learning tasks and optimization problems that are currently constrained by classical resources.
To get there, Google has begun building a dedicated Quantum AI center that brings together chip fabrication, control electronics, and algorithm research under one roof, a move it describes as essential to scaling to the million or so qubits it believes will be necessary for broadly useful applications. Executives have tied the new facility directly to the 2029 target, explicitly positioning the effort as a way to compete with quantum computing startups that are already selling early access to their own machines. In parallel, the company has linked that goal to expected benefits for AI development, even as some researchers have cast doubt on its earlier claims of “quantum supremacy.”
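To put that million-qubit figure in context, the sketch below runs the standard back-of-envelope arithmetic for surface-code error correction, the scheme Google's superconducting hardware uses in its error-correction experiments. The code distance and logical-qubit count are illustrative assumptions of mine, not numbers from Google's roadmap; they simply show how an overhead of roughly a thousand physical qubits per logical qubit multiplies out to the scale Google is describing.

```python
# Back-of-envelope surface-code overhead. All numbers below are illustrative
# assumptions, not figures published by Google.

def physical_qubits_per_logical(distance: int) -> int:
    """Rotated surface code: distance^2 data qubits plus distance^2 - 1 measure qubits."""
    return 2 * distance * distance - 1

code_distance = 22       # assumed distance needed for long, error-corrected circuits
logical_qubits = 1_000   # assumed logical-qubit count for chemistry-scale workloads

per_logical = physical_qubits_per_logical(code_distance)
total_physical = per_logical * logical_qubits

print(f"~{per_logical} physical qubits per logical qubit at distance {code_distance}")
print(f"~{total_physical:,} physical qubits in total")
# Prints roughly 967 and 967,000 respectively, i.e. the million-qubit scale
# Google says a broadly useful machine will require.
```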
Inside Google’s Quantum AI Campus and error-free ambition
Hardware alone will not make quantum computing practical, which is why Google has wrapped its 2029 target in a narrative about error correction and system integration. The company has described its Quantum AI Campus as the physical hub where it will refine qubit designs, control systems, and software stacks to deliver what it calls a working, error-free quantum computer by the end of the decade. I read that phrase “error-free” not as a literal promise of perfection, but as shorthand for a system where logical qubits are stable enough that errors can be corrected faster than they accumulate.
To deliver on that promise, Google has said the Quantum AI Campus houses the company’s quantum hardware, cryogenic infrastructure, and research teams focused on simulating molecules and other complex systems that are out of reach for classical machines. The company has explicitly linked the facility to its goal of creating a working, error-free quantum computer by 2029, describing the campus as the place where it will learn to “understand molecules better” and refine the algorithms that will run on such a machine. That framing underscores how tightly Google is coupling its hardware roadmap to specific scientific use cases rather than generic performance metrics.
Willow and the race to tame quantum errors
The biggest technical obstacle between today’s noisy devices and a 2029 workhorse is error correction, and Google has started to show its hand there as well. The company has highlighted a chip called Willow as a state-of-the-art platform for testing quantum error correction at scale, and it has claimed that the device demonstrates a key property: as logical qubits are made larger, using more physical qubits to encode each one, the overall error rate drops rather than rises. That inversion is the core requirement for building large logical qubit arrays that can survive long computations.
In reporting on Willow, Google has emphasized that “every time we increased our logical qubits” the system’s performance improved, a pattern it has framed as a quantum error correction milestone for the chip. On its own blog, the company has gone further, describing Willow as a state-of-the-art quantum chip that reduces errors exponentially as it scales up and that ran the largest quantum error correction experiment in the company’s history, a result it presents as evidence that large-scale, fault-tolerant quantum computers can be built with this approach. Those details matter because Google is not just promising error-free operation in the abstract; it is publishing concrete scaling behavior that supports its 2029 timeline.
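To make that scaling behavior concrete, here is a minimal sketch of the standard surface-code scaling relation, under which the logical error rate falls exponentially with code distance once the physical error rate sits below the correction threshold. The physical error rate, threshold, and prefactor below are my illustrative assumptions, not Google's measured Willow numbers.

```python
# Illustrative below-threshold scaling for a surface-code logical qubit.
# Constants are assumptions for demonstration, not measured Willow values.

def logical_error_rate(p_phys: float, p_threshold: float, distance: int,
                       prefactor: float = 0.1) -> float:
    """Logical error rate ~ prefactor * (p_phys / p_threshold) ** ((distance + 1) / 2)."""
    return prefactor * (p_phys / p_threshold) ** ((distance + 1) / 2)

p_phys, p_threshold = 0.002, 0.008   # assumed: operating well below threshold
for distance in (3, 5, 7):           # grow the encoded logical qubit step by step
    rate = logical_error_rate(p_phys, p_threshold, distance)
    print(f"distance {distance}: logical error rate ~ {rate:.2e}")
# Each step from distance d to d + 2 divides the logical error rate by
# p_threshold / p_phys (a factor of 4 with these assumed numbers); Google has
# reported a suppression factor of roughly 2 per step for Willow.
```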
Google’s roadmap: from NISQ to large-scale quantum
Beyond individual chips, Google has laid out a structured roadmap that explains how it expects to move from today’s noisy intermediate-scale quantum (NISQ) devices to the large-scale machines it wants by 2029. The company has framed its focus as unlocking the full potential of quantum computing by developing a large-scale computer that can move beyond the NISQ era, with intermediate milestones that include better qubits, improved control electronics, and more sophisticated algorithms. I see that roadmap as a way to de-risk the 2029 promise by breaking it into smaller, verifiable steps.
In that roadmap, Google has described a progression from current systems to error-corrected logical qubits and eventually to a machine with enough capacity to tackle chemistry, optimization, and AI workloads that are currently infeasible. The company summarizes the journey under the banner of its quantum computing roadmap, repeating that its focus is to unlock the field’s full potential by developing a large-scale computer that moves beyond the noisy intermediate-scale quantum era. That framing reinforces the idea that the 2029 target is not a single leap but the culmination of a staged engineering program.
IBM’s 2029 pledge: fault tolerance as the defining feature
While Google talks about practicality and usefulness, IBM has chosen a different anchor word for its 2029 promise: fault tolerance. In IBM’s framing, the defining feature of the next generation of quantum computers is not just size or speed, but the ability to run long, complex circuits without being derailed by noise. I read that emphasis as a deliberate attempt to set expectations about what “real” quantum advantage will look like, and to differentiate IBM’s roadmap from earlier hype cycles.
IBM has publicly said it will achieve fault-tolerant quantum computing by 2029, a message it has reinforced in multiple formats, including a short presentation in which it describes quantum computers as widely expected to solve problems that cannot be tackled with classical machines and asserts that it has a clear path to that goal. On its formal roadmap, IBM goes further, stating that in 2029 it will deliver the first fault-tolerant quantum computer, and that this system will be available to clients and capable of running 100 million gates. That combination of public messaging and detailed technical targets signals a high degree of internal confidence.
Starling and IBM’s hardware blueprint
IBM’s hardware story for 2029 centers on a system code-named Starling, which it has described as a key building block on the road to fault tolerance. Rather than chasing raw qubit counts alone, IBM is focusing on logical qubits, the error-corrected units that will actually run algorithms. In its description of Starling, the company has said the system will include 200 logical qubits and support up to 100 million quantum operations, figures that give a concrete sense of the scale it is targeting for early fault-tolerant workloads.
Those numbers appear in IBM’s explanation of how Starling fits into its broader architecture, which includes new error-correcting codes such as qLDPC and new chip connectors designed to support high-fidelity operations across larger devices. The company has presented Starling as part of its 2029 target for a practical quantum computer, repeating that the system will include 200 logical qubits and support up to 100 million quantum operations. By putting specific logical qubit counts and operation budgets on the table, IBM is inviting the community to judge its progress against measurable benchmarks rather than vague promises.
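A quick back-of-envelope calculation shows why a budget of 100 million operations forces the conversation onto error-corrected logical qubits rather than raw physical ones. Treating each logical operation as failing independently, the sketch below estimates the per-operation error rate needed for a circuit of that depth to finish correctly most of the time; the 90 percent success target is my assumption for illustration, not an IBM specification.

```python
# Error budget for a 100-million-gate circuit. The success target is an
# illustrative assumption, not a figure from IBM's roadmap.
import math

gates = 100_000_000     # IBM's stated operation budget for the 2029 system
target_success = 0.90   # assumed probability that an entire run is error-free

# With independent per-operation failures, P(success) ~ exp(-gates * p_op),
# so the tolerable per-operation logical error rate is:
max_per_op_error = -math.log(target_success) / gates

print(f"per-operation logical error rate must stay below ~{max_per_op_error:.1e}")
# Prints roughly 1e-9, several orders of magnitude beyond typical physical
# gate error rates today, which is why logical qubits are the unit IBM quotes.
```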
The IBM Poughkeepsie Lab and the path to large-scale fault tolerance
IBM is also leaning heavily on its institutional history and manufacturing footprint as part of its quantum pitch. The company has framed its legacy in Poughkeepsie, New York, as a foundation for building the next generation of quantum systems, drawing a line from its role in mainframe computing to its ambitions in quantum. I see that as more than nostalgia; it is a way of signaling that IBM intends to treat quantum as a long-lived product line, not a research project that might be spun off or abandoned.
In its description of how it will build large-scale, fault-tolerant quantum computers, IBM has highlighted that legacy, describing how the Poughkeepsie lab quickly became an important site of computer production, a history it now wants to extend to quantum hardware. The company has laid out a path to fault-tolerant quantum computing that includes new fabrication facilities, modular architectures, and an emphasis on scaling up manufacturing capacity at its facility in Poughkeepsie, New York. That focus on place and process underscores IBM’s belief that quantum will eventually be built and shipped like other high-end computing systems.
Algorithms, software, and IBM’s route to advantage
Hardware roadmaps tend to grab the headlines, but IBM has been equally vocal about the software and algorithmic work it believes is necessary to make fault-tolerant machines useful. The company has described a path that includes new quantum processors, software stacks, and algorithm research aimed at achieving both quantum advantage and full fault tolerance. I read that dual focus as an acknowledgment that raw qubit counts will not matter if developers cannot express real-world problems in forms that run efficiently on quantum hardware.
IBM has highlighted recent progress on new quantum processors and software, as well as algorithm research that improves the efficiency of quantum circuits and error correction. In describing this work, the company has said that on a parallel path it is developing high-efficiency quantum error correction, and it has announced new processors, software, and algorithm breakthroughs on the road to advantage and fault tolerance. That emphasis on algorithms is crucial, because it suggests IBM is not just building a machine and hoping useful workloads appear; it is actively shaping the software ecosystem that will run on its 2029 platform.
“Large scale, fault-tolerant” and what it means for customers
For enterprise buyers, the most important phrase in IBM’s messaging may be “large scale, fault-tolerant,” which the company has used to describe the class of quantum computer it intends to build by 2029. In practical terms, “large scale” signals a system with enough logical qubits and gate depth to handle industrial workloads, while “fault-tolerant” promises that those workloads can run reliably without constant manual intervention. I see that combination as IBM’s way of telling CIOs that quantum will eventually look and feel like other managed infrastructure, with service-level expectations rather than experimental caveats.
IBM has been explicit that it plans to build a large-scale, fault-tolerant quantum computer by 2029 and that it has unveiled a path to what it describes as the world’s largest quantum computer. That framing matters because it sets expectations not just about the machine’s capabilities, but about its availability as a service that enterprises can integrate into their workflows. When combined with the Starling system’s 200 logical qubits and 100 million quantum operations and the roadmap’s 100 million gate target, it paints a picture of a platform designed from the outset for sustained, production-grade use.
What a 2029 quantum era could actually look like
Taken together, Google’s and IBM’s 2029 commitments suggest that the second half of this decade will be a transition period in which quantum computing moves from experimental access programs to something closer to cloud infrastructure. I expect early use cases to cluster around chemistry, materials science, and optimization, where both companies have already pointed to potential breakthroughs. If Google succeeds in scaling Willow-style error correction and delivering a practical machine that integrates tightly with its AI stack, it could offer developers a new class of accelerators alongside GPUs and TPUs.
On the IBM side, a large-scale, fault-tolerant platform anchored in Poughkeepsie and built around systems like Starling would likely be delivered as a managed service, with clients accessing 200 logical qubits and 100 million quantum operations through familiar APIs and software tools. In that world, the phrase “Deliver the first fault-tolerant quantum computer” would no longer be a roadmap bullet, but a description of a live system that enterprises can call from their existing workflows. Whether both companies hit their 2029 targets exactly or slip by a year or two, the level of detail in their plans suggests that the era of hand-waving about quantum is ending. The next phase will be judged on delivered qubits, corrected errors, and real workloads, not just on promises.