Quantum computers may slam into hard architectural walls long before they can crack the encryption protecting online banking, government secrets, and critical infrastructure. Fresh theoretical research shows that the physical layout of quantum chips, specifically the constraints of two-dimensional connectivity, could inflate the overhead for advanced error-correcting codes so dramatically that fault-tolerant machines remain out of reach for the attacks that worry cryptographers most. The finding complicates a popular narrative that rapidly shrinking qubit estimates will soon put RSA-2048 encryption within striking distance.
The Shrinking Qubit Estimates and Why They Mislead
The resource cost of breaking RSA-2048 encryption with a quantum computer has been a moving target for years. A widely cited 2019 paper estimated that roughly 20 million noisy qubits would be needed to factor a 2048-bit RSA integer in about eight hours, assuming a surface-code-protected machine with aggressive parallelization; those early resource estimates quickly became a benchmark and a talking point in both industry slide decks and policy briefings. Researchers have been chipping away at that figure ever since, using better compilation techniques, tighter circuit constructions for modular arithmetic, and more optimistic assumptions about gate fidelities.
More recent work has pushed the estimate down sharply. According to a separate preprint, optimized implementations of Shor’s algorithm could factor RSA-2048 with fewer than a million noisy qubits by streamlining the arithmetic and assuming fast, high-quality error correction. And a February 2026 proposal called the Pinnacle Architecture goes further still, claiming that quantum LDPC (low-density parity-check) codes could reduce the requirement to 100,000 physical qubits for a full-scale attack. Each of these estimates, however, rests on assumptions about chip connectivity, gate error rates, and decoder performance that real hardware has not yet achieved. The gap between the 20‑million‑qubit baseline and the 100,000‑qubit proposal is not simply a story of progress; it is a story of increasingly aggressive theoretical assumptions meeting increasingly stubborn engineering constraints.
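To see how sensitive these figures are to error-correction assumptions, consider a deliberately crude back-of-envelope model: multiply the number of logical qubits the algorithm needs by the surface-code overhead of roughly 2d² physical qubits per logical qubit at code distance d. The sketch below uses illustrative placeholder numbers (an assumed 6,000-logical-qubit machine and three assumed code distances), not figures from the papers above, but it reproduces the kind of order-of-magnitude swings that different assumptions produce.

```python
# Back-of-envelope sensitivity of RSA-2048 qubit estimates to error-correction
# assumptions. All numbers are illustrative placeholders, not values taken from
# the papers discussed above.

def surface_code_overhead(distance: int) -> int:
    """Physical qubits per logical qubit for a rotated surface code patch:
    d*d data qubits plus d*d - 1 measurement qubits."""
    return 2 * distance * distance - 1

def total_physical_qubits(logical_qubits: int, distance: int) -> int:
    """Total physical qubits, ignoring routing space and magic-state factories."""
    return logical_qubits * surface_code_overhead(distance)

LOGICAL_QUBITS = 6_000  # assumed logical-qubit count for Shor on RSA-2048

for label, distance in [("pessimistic (d = 31)", 31),
                        ("moderate    (d = 21)", 21),
                        ("aggressive  (d = 9) ", 9)]:
    total = total_physical_qubits(LOGICAL_QUBITS, distance)
    print(f"{label}: ~{total / 1e6:.1f} million physical qubits")
```

The point of the toy model is not the specific outputs but the lever arm: halving the required code distance roughly quarters the machine size, which is why assumptions about error rates and decoders dominate the headline numbers.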
That distinction matters for risk assessment. Cryptographers and security agencies care less about the absolute number of qubits in a toy model and more about whether those qubits can be laid out, wired, and controlled in a manufacturable device. A design that assumes every qubit can talk to every other qubit with equal ease is very different from one that respects the messy realities of chip fabrication, wiring congestion, and cryogenic packaging. As the estimates shrink, the underlying architectural assumptions have quietly grown more heroic.
2D Chip Layouts Create a Ceiling
The central tension is geometric. Most quantum processors built today arrange their qubits on flat, two-dimensional surfaces where each qubit connects only to its nearest neighbors. That layout works well enough for small demonstrations, but it creates severe bottlenecks when researchers try to implement the high-rate quantum LDPC codes that drive the most optimistic qubit estimates. A 2024 analysis showed that when codes with long-range interactions are forced onto planar layouts with only local couplings, geometric constraints can multiply the overhead needed to realize logical gates and syndrome extraction. In plain terms, the codes that promise the biggest efficiency gains on paper demand wiring patterns that flat chips cannot deliver without enormous additional cost.
This is not a new suspicion, but it now has formal backing. Earlier theoretical work demonstrated that restricted connectivity directly limits the parameters and capabilities of quantum error-correcting codes, putting trade-offs between code distance, rate, and locality on a more rigorous footing. Together, these findings suggest that proposals like the Pinnacle Architecture, which lean heavily on quantum LDPC codes with sparse but nonlocal checks, would need to move beyond 2D layouts to deliver on their promises. That likely means three-dimensional integration, modular architectures linked by photonic interconnects, or other forms of non-local coupling that remain in early conceptual stages and carry their own fabrication and scaling challenges.
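One formal result of the kind alluded to here is the locality bound often attributed to Bravyi, Poulin, and Terhal: a code whose parity checks are geometrically local in two dimensions must satisfy k·d² ≤ c·n, where n is the number of physical qubits, k the number of logical qubits, d the code distance, and c a constant. The sketch below assumes c = 1 and invented code parameters purely for illustration, to show how such a bound inflates the footprint of a high-rate, high-distance block when it is forced to be strictly 2D-local.

```python
import math

# Illustration of a 2D-locality bound of the form k * d**2 <= c * n for codes
# whose parity checks are geometrically local on a planar chip. The constant c
# and all code parameters below are assumptions chosen purely for illustration.

C = 1.0

def min_physical_qubits_2d_local(k: int, d: int, c: float = C) -> int:
    """Smallest n compatible with the bound k * d^2 <= c * n."""
    return math.ceil(k * d * d / c)

# Hypothetical high-rate LDPC block: 100 logical qubits at distance 20,
# assumed to fit in 1,000 physical qubits when nonlocal couplings are allowed.
k, d = 100, 20
n_nonlocal = 1_000
n_2d_local = min_physical_qubits_2d_local(k, d)

print(f"nonlocal LDPC block (assumed):     n = {n_nonlocal}")
print(f"2D-local bound for the same k, d:  n >= {n_2d_local}")
print(f"inflation factor:                  {n_2d_local / n_nonlocal:.0f}x")
```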
The upshot is that there may be a practical ceiling on how much efficiency can be wrung out of error correction without a corresponding revolution in hardware architecture. Shrinking qubit counts in theoretical papers do not automatically translate to shrink-wrapped quantum chips that can be mass-produced and deployed in data centers or adversarial laboratories.
Error Correction Progress Is Real but Small-Scale
Skeptics of the “limits” argument can point to genuine experimental milestones. A paper from August 2024 presented two surface-code memories operating below threshold, one at distance‑7 and another at distance‑5, with repeated rounds of error detection demonstrating that logical qubits could retain information longer than the underlying physical qubits. Demonstrating this kind of sub-threshold operation is a prerequisite for scaling up fault-tolerant quantum computation, and reaching it in the lab is a legitimate step forward that validates decades of theoretical work on stabilizer codes.
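Sub-threshold behavior is usually summarized by a suppression factor, often written Λ: each time the code distance increases by two, the logical error rate per cycle drops by roughly a factor of Λ. The sketch below implements that standard phenomenological model with assumed values for Λ and the distance-3 error rate, not any experiment’s measured numbers, just to show how quickly logical errors can fall once a device is below threshold.

```python
# Phenomenological model of sub-threshold error suppression: each distance-2
# increase divides the logical error rate per cycle by a factor Lambda. The
# values of Lambda and the distance-3 error rate are assumptions for
# illustration, not measured numbers from any particular experiment.

LAMBDA = 2.0       # assumed suppression factor per distance-2 step
P_L_AT_D3 = 1e-2   # assumed logical error rate per cycle at distance 3

def logical_error_rate(distance: int,
                       p_d3: float = P_L_AT_D3,
                       lam: float = LAMBDA) -> float:
    """Logical error rate per cycle at odd distance d, extrapolated from d = 3."""
    return p_d3 / lam ** ((distance - 3) / 2)

for d in (3, 5, 7, 11, 15, 21):
    print(f"d = {d:>2}: logical error per cycle ~ {logical_error_rate(d):.1e}")
```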
But surface codes are among the least efficient error-correcting schemes in terms of qubit overhead. They work precisely because they are compatible with 2D layouts and nearest-neighbor connectivity, mapping neatly onto the planar chips that superconducting and trapped-ion platforms can build today. The codes that promise dramatic qubit reductions, quantum LDPC codes in particular, require richer connectivity that current hardware does not support at scale. So the experimental wins and the theoretical optimism are, for now, pulling in opposite directions: the codes that work on real chips are expensive in qubit count, and the codes that are cheap on paper do not yet fit on real chips.
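The gap in qubit economics can be made concrete by comparing encoding rates. A distance-d surface code patch devotes roughly 2d² physical qubits to a single logical qubit, while quantum LDPC blocks with parameters in the ballpark of published bivariate bicycle codes (on the order of [[144, 12, 12]]) pack a dozen logical qubits into a comparable footprint, at the price of the longer-range couplings discussed above. The comparison below is illustrative only; the specific parameters are assumptions, not a claim about any particular device.

```python
# Rough encoding-rate comparison between a surface code patch and a quantum
# LDPC block. The LDPC parameters are in the ballpark of published bivariate
# bicycle codes and are used here purely as an illustration.

def surface_code_qubits_per_logical(distance: int) -> int:
    """A rotated surface code patch uses about 2*d^2 physical qubits per logical qubit."""
    return 2 * distance * distance

def ldpc_qubits_per_logical(n_physical: int, k_logical: int) -> float:
    return n_physical / k_logical

d = 12
print(f"surface code, d = {d}:       {surface_code_qubits_per_logical(d)} physical qubits per logical qubit")
print(f"LDPC block [[144, 12, 12]]: {ldpc_qubits_per_logical(144, 12):.0f} physical qubits per logical qubit")
```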
This tension mirrors a familiar pattern from classical computing. High-performance error-correcting codes for deep-space communication or dense storage media often assume idealized channels and decoding hardware, but real-world systems adopt more modest schemes that balance performance with implementation cost. In quantum computing, that compromise may be even starker, because every extra qubit and every extra coupling line must operate at cryogenic temperatures with exquisite control.
Why Governments Are Not Waiting
Even if quantum computers face hard architectural ceilings, the policy world has decided not to gamble on that outcome. The National Institute of Standards and Technology released its first three finalized post-quantum encryption standards in August 2024, selecting new public-key schemes that are intended to withstand both classical and quantum attacks; the agency’s announcement of these initial post-quantum selections emphasized the need for timely migration. Then in March 2025, NIST selected HQC as a fifth algorithm for post-quantum encryption, explicitly broadening the diversity of mathematical assumptions in its portfolio; by adding HQC, the agency signaled that algorithmic redundancy is a feature, not a flaw, of the emerging standard set.
The urgency extends beyond standards bodies. CISA, NIST, and NSA have jointly recommended that organizations begin preparing now for post-quantum cryptography, with particular attention to “harvest now, decrypt later” risks; in their joint guidance, the agencies urge network defenders to inventory cryptographic assets, prioritize long-lived secrets, and develop migration roadmaps in line with federal preparation advice. That threat model assumes adversaries are already collecting encrypted data today with the intention of decrypting it once a sufficiently powerful quantum computer exists, whether that takes five years or twenty-five. For data with long confidentiality requirements, such as health records, intelligence, and financial instruments, the window for safe migration is already closing regardless of the exact trajectory of quantum hardware.
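That migration math is often compressed into a timeline heuristic commonly attributed to Michele Mosca: if the number of years data must remain confidential plus the number of years a migration will take exceeds the number of years before a cryptographically relevant quantum computer exists, the data is effectively already exposed. The sketch below applies that inequality to a few hypothetical scenarios; all of the year counts are assumptions chosen for illustration.

```python
# Mosca-style timeline heuristic: data is at risk if its confidentiality shelf
# life plus the migration time exceeds the time until a cryptographically
# relevant quantum computer (CRQC) exists. All year counts are assumptions
# chosen for illustration.

def at_risk(shelf_life_years: float,
            migration_years: float,
            years_to_crqc: float) -> bool:
    """True when x + y > z in Mosca's inequality."""
    return shelf_life_years + migration_years > years_to_crqc

scenarios = [
    ("web session tokens",   0.1, 5, 15),
    ("medical records",       25, 5, 15),
    ("classified archives",   50, 8, 25),
]

for name, x, y, z in scenarios:
    verdict = "already at risk" if at_risk(x, y, z) else "ok for now"
    print(f"{name}: shelf life {x}y + migration {y}y vs CRQC in {z}y -> {verdict}")
```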
These efforts sit within a much broader portfolio of standards work at NIST. While post-quantum schemes grab headlines, the same agency does everything from curating chemical reference data in its online thermochemical tables to maintaining the public software vulnerability database that patching programs depend on, a reminder that cryptographic transitions are only one strand of a much larger ecosystem of technical governance. For organizations trying to prioritize scarce security resources, that context matters: post-quantum migration must compete with patching, configuration management, and basic hygiene, even as national guidance frames it as a strategic imperative.
Architectural Walls and Strategic Choices
None of this means that quantum attacks on today’s public-key infrastructure are impossible. It does mean that the path from theoretical qubit counts to practical cryptanalytic machines is strewn with architectural obstacles that cannot be wished away by better algorithms alone. Two-dimensional chip layouts, limited connectivity, and the overheads of realistic error correction all conspire to make the most aggressive projections look fragile. At the same time, the stakes of being wrong are high enough that governments and critical industries are treating quantum risk as a certainty to be managed rather than a possibility to be debated.
The next decade of quantum computing will likely be defined as much by engineering trade-offs as by algorithmic breakthroughs. If new architectures can deliver richer connectivity (through 3D integration, modular networking, or yet-uninvented approaches), then today’s LDPC-based proposals may come closer to reality, and the lower qubit estimates will start to matter more. If not, surface codes and other locality-friendly schemes may dominate, keeping truly large-scale attacks on RSA-2048 at bay longer than the most alarmist timelines suggest.
For now, the prudent stance is a kind of strategic ambivalence: assume that quantum computers capable of breaking widely deployed public-key schemes will eventually arrive, but recognize that hard architectural walls could delay that moment well beyond the most optimistic forecasts. That combination of technical skepticism and policy urgency is shaping how standards bodies, security agencies, and industry groups are planning the long, messy transition to a post-quantum world.
*This article was researched with the help of AI, with human editors creating the final content.