A team of physicists has demonstrated, on a 32-qubit superconducting processor, two quantum error correction codes that require far fewer physical qubits than the standard approach, offering a concrete path toward shrinking the massive hardware overhead that has long blocked practical quantum computing. The experiment, published in Nature Physics, used a class of codes called quantum low-density parity-check, or qLDPC, which encode logical information more efficiently than the widely used surface code. The result lands amid a flurry of related advances, from new code families tailored to simple chip layouts to architectural blueprints that project cryptography-breaking machines built with an order of magnitude fewer qubits than previously assumed.
What the Kunlun Experiment Proved
The peer-reviewed paper describes two low-overhead qLDPC codes, including bivariate bicycle constructions tested on a 32-qubit superconducting processor named Kunlun. That chip features long-range couplers, hardware connections that let physically distant qubits interact without routing signals through a chain of neighbors. The design choice matters because qLDPC codes demand connectivity patterns that conventional nearest-neighbor chip layouts cannot easily support.
The authors report syndrome extraction circuits tailored to Kunlun’s connectivity, along with measurements of logical error rates across different code distances. They show that, for comparable protection, their qLDPC implementations use significantly fewer qubits than a surface-code layout would require on the same hardware. An expanded manuscript provides additional derivations, calibration data, and circuit-level simulations that support the overhead estimates and clarify how the long-range couplers are scheduled to avoid crosstalk.
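The paper specifies those circuits in detail; as a generic illustration of what a single syndrome extraction step involves, the sketch below uses the open-source stim simulator to measure one weight-4 parity check with an ancilla qubit. It is a minimal toy, not the Kunlun team's schedule, and in a qLDPC code some of the data qubits in such a check would be physically distant, reached through the long-range couplers described above.

```python
import stim

# Minimal sketch of one syndrome extraction step, for illustration only.
# Qubits 0-3 are data qubits and qubit 4 is an ancilla; the circuit folds
# the Z-parity of the four data qubits onto the ancilla and measures it.
# In a qLDPC code, some of these data qubits would sit far apart on the
# chip and be coupled to the ancilla through long-range couplers.
circuit = stim.Circuit()
circuit.append("R", [4])                     # reset the ancilla to |0>
for data in (0, 1, 2, 3):
    circuit.append("CX", [data, 4])          # accumulate each data qubit's parity
circuit.append("MR", [4])                    # measure (and reset) the ancilla
circuit.append("DETECTOR", [stim.target_rec(-1)])

# With no noise in the circuit, every sampled syndrome bit is 0 (parity satisfied).
sampler = circuit.compile_detector_sampler()
print(sampler.sample(shots=5))
```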
Crucially, the work goes beyond a one-off demonstration. The researchers outline how the same design principles could scale to larger processors, arguing that the connectivity pattern, rather than the specific chip, is the key ingredient. That gives other groups a template: if they can engineer similar non-local couplings, they can attempt comparable qLDPC implementations without reinventing the entire control stack.
Why Surface Codes Hit a Wall
To understand what makes qLDPC codes attractive, it helps to contrast them with the dominant alternative. The surface code has been the default strategy for quantum error correction because it works on planar chips where each qubit talks only to its nearest neighbors. Google Quantum AI recently demonstrated this approach on its Willow processor, achieving below-threshold logical memories at distances 5 and 7 with real-time decoding. That experiment marked a major milestone: it showed that increasing the code distance actually reduced logical error rates, a defining feature of fault tolerance.
The same result, however, underscored the cost. Even a modest-distance surface code consumed on the order of a hundred physical qubits for a single logical qubit, and pushing to the distances needed for large-scale algorithms would require millions of qubits. The core limitation is encoding rate: each surface code patch encodes exactly one logical qubit, while the number of physical qubits in the patch grows quadratically with the code distance, so the rate shrinks toward zero as protection improves and stronger error suppression quickly becomes hardware-prohibitive.
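A back-of-envelope calculation makes that scaling concrete. Using the common approximation that a distance-d rotated surface code patch needs about 2d² physical qubits to protect one logical qubit, the sketch below shows how fast the overhead climbs; the distances and the 1,000-logical-qubit target are illustrative assumptions, not figures from the experiments described here.

```python
# Back-of-envelope surface-code overhead, for illustration only.
# Assumption: a rotated surface code patch of distance d uses d*d data
# qubits plus d*d - 1 ancilla qubits (about 2*d*d physical qubits in all)
# and encodes exactly one logical qubit.
def surface_code_physical_qubits(distance: int) -> int:
    return 2 * distance * distance - 1

for d in (5, 7, 15, 25):
    per_logical = surface_code_physical_qubits(d)
    print(f"distance {d:2d}: ~{per_logical:5d} physical qubits per logical qubit; "
          f"1,000 logical qubits -> ~{per_logical * 1000:,} physical qubits")
```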
qLDPC codes attack that bottleneck by distributing parity checks over a sparse graph of qubit interactions. Each qubit participates in only a few checks, but the global structure allows many logical qubits to share the same physical fabric. In principle, this yields a constant-rate code where the number of logical qubits grows proportionally with the total number of physical qubits, dramatically improving hardware efficiency. The price is complexity: implementing these sparse, global checks demands connectivity that standard square-grid chips do not naturally provide. Kunlun’s long-range couplers are one solution, but they are not the only route under active investigation.
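For a concrete sense of the rate advantage, the comparison below uses a published bivariate bicycle code with parameters [[144, 12, 12]] — 144 data qubits encoding 12 logical qubits at distance 12 — against distance-12 surface code patches. The ancilla-counting convention and the resulting ratio are illustrative assumptions, not measurements from the Kunlun experiment.

```python
# Illustrative rate comparison, not a result from the Kunlun paper.
# Assumptions: each data qubit needs one ancilla qubit for syndrome
# extraction, and a rotated surface code patch of distance d uses about
# 2*d*d physical qubits to encode a single logical qubit.
qldpc_data, qldpc_logical, distance = 144, 12, 12    # [[144, 12, 12]] bivariate bicycle code
qldpc_physical = 2 * qldpc_data                      # data qubits plus ancillas

surface_per_logical = 2 * distance * distance        # one patch per logical qubit
surface_physical = surface_per_logical * qldpc_logical

print(f"qLDPC block:  {qldpc_physical} physical qubits for {qldpc_logical} logical qubits")
print(f"surface code: {surface_physical} physical qubits for {qldpc_logical} logical qubits")
print(f"overhead ratio: ~{surface_physical / qldpc_physical:.0f}x")
```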
New Code Families Designed for Real Chips
Because specialized couplers are difficult to engineer and scale, other researchers are trying to bring qLDPC benefits to more conventional layouts. One proposal introduces directional code families that are compatible with square and hexagonal grids, the geometries already favored by many superconducting and trapped-ion platforms. In simulations, these codes achieve target logical error rates with substantially fewer physical qubits than rotated planar surface codes, while respecting nearest-neighbor constraints along specific directions.
The key idea is to orient the parity checks so that each stabilizer involves qubits along preferred axes of the grid, reducing routing overhead and simplifying the control electronics. This structure preserves much of the low-density advantage of qLDPC while staying within the wiring envelope that commercial foundries and ion-trap engineers already know how to fabricate. If validated experimentally, such codes could offer a drop-in replacement for surface codes on existing chips, trading some architectural simplicity for large savings in qubit count.
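As a toy illustration of the idea, not the construction from the proposal itself, the sketch below lays out weight-4 checks on a square grid so that every check runs along a single row or a single column, keeping each interaction nearest-neighbor along one preferred axis.

```python
# Hypothetical "directional" check layout on a square grid, for
# illustration only; the actual code families discussed above are defined
# differently. Each toy check involves four consecutive qubits along one
# row or one column, so interactions stay nearest-neighbor along one axis.
GRID = 6  # 6x6 grid of data qubits

def qubit(row: int, col: int) -> int:
    return row * GRID + col

row_checks = [[qubit(r, c + i) for i in range(4)]
              for r in range(GRID) for c in range(GRID - 3)]
col_checks = [[qubit(r + i, c) for i in range(4)]
              for c in range(GRID) for r in range(GRID - 3)]

print(f"{len(row_checks)} row-oriented checks, {len(col_checks)} column-oriented checks")
print("example row check:", row_checks[0])   # qubits 0, 1, 2, 3 along the top row
```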
Another line of work focuses on the decoder rather than the code itself. Traditional decoders treat measurement outcomes as hard bits: each check either passes or fails. A methods study argues that incorporating soft information from analog readout signals into the decoding algorithm can significantly improve performance. Instead of discarding amplitude and phase details, the decoder uses them to assign probabilities to different error patterns, effectively making more informed guesses about what went wrong.
By exploiting this richer data, the soft-information decoder can reach a given logical error rate at lower code distances or tolerate higher physical error rates at the same distance. In hardware terms, that means fewer qubits or looser device specifications for equivalent protection. Combined with qLDPC codes, such decoding strategies could compound the overhead reductions, especially in regimes where measurement noise dominates gate errors.
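The sketch below captures the core of that idea in a deliberately simplified setting: rather than thresholding each analog readout into a hard 0 or 1, the decoder converts it into a probability that the check was actually violated and can weight error hypotheses accordingly. The Gaussian readout model and its parameters are assumptions for illustration, not values from the methods study.

```python
import math

# Toy model of soft syndrome readout, for illustration only.
# Assumption: the analog readout of a check is drawn from a Gaussian
# centered at -1.0 (check satisfied) or +1.0 (check violated), with equal
# priors and noise standard deviation SIGMA. A hard decoder thresholds at
# zero; a soft decoder keeps the posterior probability of a violation.
SIGMA = 0.6

def p_violated(readout: float) -> float:
    """Posterior probability that the check was violated, given the analog value."""
    like_ok = math.exp(-((readout + 1.0) ** 2) / (2 * SIGMA ** 2))
    like_bad = math.exp(-((readout - 1.0) ** 2) / (2 * SIGMA ** 2))
    return like_bad / (like_ok + like_bad)

for readout in (-1.2, -0.1, 0.05, 0.9):
    hard = int(readout > 0)            # what a hard decoder records
    soft = p_violated(readout)         # what a soft decoder keeps
    print(f"readout {readout:+.2f} -> hard bit {hard}, soft P(violated) = {soft:.2f}")
```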
From Lab Demos to Architecture Blueprints
The most dramatic implications of these techniques appear in system-level resource estimates. A recent architecture study introduces what the authors call the Pinnacle layout, arguing that a combination of qLDPC codes, optimized magic-state factories, and aggressive scheduling could bring the physical qubit cost of factoring a 2048-bit RSA modulus down to roughly 100,000 qubits. The corresponding preprint claims that this configuration, detailed in an RSA resource analysis, represents about an order-of-magnitude reduction compared with earlier surface-code-based projections.
Those numbers are not yet peer-reviewed and depend on optimistic but not obviously impossible assumptions about gate fidelities and parallelism. Even so, they align qualitatively with independent estimates from other groups that have examined qLDPC-based fault-tolerant architectures and arrived at similar sub-100,000-qubit figures for algorithms involving thousands of logical qubits. The convergence of these studies suggests that the community’s expectations for the hardware scale required to threaten current public-key cryptography may need revision.
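A crude back-of-envelope calculation shows why estimates of this kind hinge almost entirely on the per-logical-qubit overhead, which is exactly the quantity qLDPC codes attack. Every number below is an illustrative assumption, not a value drawn from the Pinnacle preprint or any other study cited in this article.

```python
# Back-of-envelope only: all figures here are illustrative assumptions.
# Assumption: factoring a 2048-bit RSA modulus needs on the order of three
# logical qubits per modulus bit, ignoring magic-state factories and
# routing space. The total then scales directly with the assumed number of
# physical qubits per logical qubit.
modulus_bits = 2048
logical_qubits = 3 * modulus_bits      # roughly 6,100 logical qubits (assumed)

# Sweep the overhead from surface-code-like values down to the kind of
# constant-rate qLDPC overheads discussed above.
for overhead in (1000, 300, 100, 30, 15):
    total = logical_qubits * overhead
    print(f"{overhead:4d} physical qubits per logical qubit -> ~{total:,} physical qubits")
```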
If such projections hold up under scrutiny, the timeline for cryptographically relevant quantum computers could compress. Surface-code-only designs that require millions of qubits are difficult to reconcile with foreseeable fabrication yields, cryogenic infrastructure, and control electronics. Designs in the hundred-thousand-qubit range, by contrast, fall within the extrapolated roadmaps of several leading hardware efforts, assuming continued improvements in error rates and connectivity. That shift would have direct consequences for cryptographic migration planning and for policymakers weighing the urgency of post-quantum standards.
Operational Fault Tolerance Beyond Memory
Reliable storage of quantum information is only part of the story. Useful computation requires performing logical gates on encoded qubits while maintaining protection, a challenge that introduces new sources of error from multi-qubit interactions and control crosstalk. Recent experiments have begun to address this by demonstrating time-dependent reconfiguration of stabilizer circuits, effectively reshaping the code in real time to support different logical operations while minimizing correlated faults.
In parallel, other teams are exploring how to integrate qLDPC-style encodings into full computational stacks, from compilation through syndrome extraction to decoding. These efforts must contend with practical constraints such as limited measurement bandwidth, finite classical processing speed, and thermal budgets in cryogenic environments. Early results indicate that careful scheduling of checks and gates, along with hierarchical decoding schemes that exploit code structure, can keep these overheads manageable even as code distances grow.
Access to the underlying experimental data and analysis has also become a point of emphasis. Some readers encounter login-gated supplementary material when following journal links, but many authors now mirror key figures and derivations on preprint servers, helping ensure that the broader community can validate and extend their work.
The Road Ahead for Practical Quantum Machines
Taken together, the Kunlun qLDPC demonstration, new grid-compatible code families, soft-information decoders, and aggressive architectural studies sketch a path away from the brute-force surface-code paradigm. Instead of accepting million-qubit requirements as inevitable, researchers are attacking overhead from multiple angles: code design, hardware connectivity, measurement processing, and system architecture.
Significant challenges remain. qLDPC codes are more complex to implement and tune than planar surface codes, and their performance in large, noisy devices is still largely uncharted territory. Long-range couplers introduce engineering risks, while advanced decoders demand fast, low-power classical processors operating close to the quantum hardware. Nonetheless, the recent wave of results has shifted the conversation from whether overhead can be reduced to how far it can realistically be pushed.
If future experiments confirm that constant-rate or near-constant-rate qLDPC codes can be made robust on scalable hardware, the qubit counts associated with fault-tolerant quantum computing could drop by an order of magnitude or more. That would not make quantum computers easy to build, but it would move them from the realm of speculative mega-projects into the domain of ambitious yet plausible engineering. For now, the Kunlun processor and its successors stand as early proof that the long-assumed overhead wall is not as immovable as it once seemed.
*This article was researched with the help of AI, with human editors creating the final content.