Researchers at the California Institute of Technology and the startup Oratomic have published a theoretical framework showing that a practical quantum computer capable of breaking modern encryption could be built with as few as 10,000 to 20,000 physical qubits. That figure represents a dramatic reduction from earlier estimates that placed the threshold at roughly one million qubits or higher, and it suggests the timeline for useful quantum machines may be considerably shorter than the field has assumed.
From One Million to 10,000 Qubits
The conventional math behind fault-tolerant quantum computing has long been discouraging. A standard estimate holds that about 1,000 logical qubits are needed to run algorithms like Shor’s, which can factor the large integers that underpin RSA encryption. Because each logical qubit typically requires around 1,000 noisy physical qubits for error correction, the total hardware bill comes to roughly one million physical qubits. A widely cited 2019 analysis by Gidney and Ekera calculated that factoring a 2048-bit RSA integer would demand tens of millions of noisy qubits running for hours under surface-code and nearest-neighbor gate assumptions.
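As a rough illustration of that textbook arithmetic, the overhead simply multiplies out; the sketch below uses the round figures quoted above, not numbers from any specific paper.

```python
# Back-of-the-envelope arithmetic behind the "one million qubits" figure.
# The inputs are the round estimates quoted above, not values from a specific paper.

logical_qubits_needed = 1_000      # rough logical-qubit count for Shor's algorithm on RSA-2048
physical_per_logical = 1_000       # typical surface-code overhead per logical qubit

total_physical = logical_qubits_needed * physical_per_logical
print(f"Estimated physical qubits: {total_physical:,}")  # -> Estimated physical qubits: 1,000,000
```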
The new Caltech and Oratomic paper, posted to the arXiv preprint server, argues that the ratio of physical to logical qubits can be slashed by combining high-rate quantum error-correcting codes, optimized logical instruction sets, and careful circuit design. Under this scheme, a cryptographically relevant run of Shor’s algorithm could execute on as few as 10,000 reconfigurable atomic qubits. The key insight is that neutral-atom platforms, which trap individual atoms using focused laser beams called optical tweezers, offer the reconfigurability and connectivity that older qubit architectures lack. That flexibility lets the system use error-correcting codes with much better encoding rates than the surface codes assumed in earlier resource estimates.
Hardware That Already Scales
This is not purely a paper exercise. The theoretical work builds on real experimental progress from the same Caltech group. In September 2025, the team set a record by assembling a tweezer array with 6,100 cesium-atom qubits while maintaining coherence and high-fidelity operations. That array achieved a coherence time of 12.6 seconds, along with record imaging survival and fidelity metrics. Those numbers matter because long coherence times give the processor more room to perform complex gate sequences before quantum information degrades.
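To get a feel for why a 12.6-second coherence time matters, it helps to compare it against a typical gate duration. The gate time in the sketch below is an assumed illustrative value, and the calculation deliberately ignores error correction, which changes the picture considerably.

```python
# Rough depth budget: how many sequential operations fit inside one coherence window.
# The gate duration is an assumed, illustrative value, not a measured figure from the experiment.

coherence_time_s = 12.6        # coherence time reported for the 6,100-atom array
assumed_gate_time_s = 1e-6     # hypothetical ~1 microsecond per operation

max_sequential_ops = coherence_time_s / assumed_gate_time_s
print(f"Roughly {max_sequential_ops:,.0f} sequential operations per coherence window")
```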
The 6,100-qubit demonstration is significant for another reason: it shows that neutral-atom arrays can scale into the thousands of qubits while preserving the quality of individual qubit operations. A separate, earlier experiment demonstrated a programmable encoded-logical-qubit processor operating with up to roughly 280 physical qubits, showing that error-protected logical operations are already possible on reconfigurable atom platforms. More recently, Caltech researchers reported high-fidelity multiqubit control in large-scale neutral-atom systems, further strengthening the case that the underlying hardware can be extended without catastrophic losses in performance. Taken together, these results suggest the hardware side of the 10,000-qubit target is not a distant aspiration but an engineering challenge with a visible path.
Why the Gap Closed So Fast
The difference between one million qubits and 10,000 is not just a matter of better hardware. It reflects a shift in how researchers think about the entire software and architecture stack. Earlier estimates, including the Gidney and Ekera benchmark, assumed surface codes on fixed, nearest-neighbor qubit grids. Surface codes are well understood but wasteful; they dedicate large numbers of physical qubits to protecting each logical qubit because their encoding rate is low.
Neutral-atom arrays change the equation because atoms can be physically rearranged during computation. The Caltech team’s approach uses a zone-based architecture in which atoms are shuttled between different functional regions of the array. This reconfigurability enables the use of higher-rate error-correcting codes that pack more logical information into fewer physical qubits. Combined with a tailored logical instruction set and circuit-level optimizations, the overhead drops by roughly two orders of magnitude compared to surface-code baselines. Craig Gidney’s own updated 2025 analysis, which refined his earlier resource estimates for factoring RSA-2048, had already pushed the requirement below one million noisy qubits under 0.1% gate-error assumptions, but the Caltech framework goes substantially further by exploiting the unique strengths of atom arrays.
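A minimal sketch of how the encoding rate drives the total qubit count follows; both rates below are placeholder assumptions chosen to illustrate a roughly hundredfold gap, not parameters taken from the Caltech and Oratomic paper.

```python
# How the encoding rate of the error-correcting code drives the physical-qubit count.
# Both rates are placeholder assumptions for illustration, not values from the paper.

def physical_qubits(logical_qubits: int, encoding_rate: float) -> int:
    """Physical qubits needed when the code stores logical qubits at the given rate."""
    return round(logical_qubits / encoding_rate)

logical_needed = 1_000

surface_code_rate = 1 / 1_000   # roughly one logical qubit per ~1,000-qubit surface-code patch
high_rate_code = 1 / 10         # hypothetical high-rate code enabled by reconfigurable connectivity

print(physical_qubits(logical_needed, surface_code_rate))  # ~1,000,000
print(physical_qubits(logical_needed, high_rate_code))     # ~10,000
```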
What This Means for Encryption and National Strategy
The practical stakes are hard to overstate. RSA-2048 encryption protects everything from banking transactions to classified government communications. If a quantum computer can factor those keys, the entire public-key infrastructure that secures the internet becomes vulnerable. The standard assumption in cybersecurity planning has been that such a machine is decades away, precisely because building a million-qubit processor seemed so far off. A 10,000-qubit threshold compresses that timeline sharply.
The U.S. Government Accountability Office has flagged quantum computing as a strategic priority, calling for updates to the national quantum strategy to maintain American competitiveness in the field, as outlined in a recent federal report. The Caltech result adds urgency to that recommendation. If the threshold really sits near 10,000 qubits, then arrays at roughly 60 percent of the needed scale have already been demonstrated, and the transition to post-quantum cryptographic standards becomes a near-term operational concern rather than a long-range planning exercise.
The Gap Between Theory and a Working Machine
A healthy dose of caution is still warranted. The Caltech and Oratomic framework assumes error rates, gate fidelities, and control systems that, while plausible, have not yet been demonstrated simultaneously in a single device at the 10,000-qubit scale. Neutral-atom platforms must show that they can load, rearrange, and entangle that many atoms with consistently low error, all while running the intricate sequences required by Shor’s algorithm.
Engineering challenges loom at every layer. Laser systems must be stabilized to maintain uniform trapping and control across thousands of sites. Classical control electronics and software must orchestrate vast numbers of parallel operations with nanosecond timing precision. Error-correcting codes with high encoding rates are more complex to decode, demanding powerful classical processors that can keep up with the quantum hardware in real time.
There is also the question of reliability over long computations. Even with improved codes and optimized circuits, breaking RSA-2048 will require billions of logical operations. Any residual bias in error patterns, crosstalk between zones, or drift in calibration over hours of runtime could erode the promised advantages. Demonstrating stable, large-scale operation will likely require several generations of experimental systems, each ironing out new failure modes that only appear at higher qubit counts.
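A quick way to see how demanding that is: with billions of operations, the tolerable error per logical operation has to fall below roughly one in a billion. The sketch below assumes independent errors and uses only the order-of-magnitude operation count from the paragraph above.

```python
# Error budget for a long computation, assuming independent errors per logical operation.
# The operation count is the order-of-magnitude figure quoted above, used for illustration.

total_logical_ops = 1e9            # "billions of logical operations"
target_success_prob = 0.5          # modest target probability for the whole run to succeed

# With independent errors, success probability ~ (1 - p)**N, so solve (1 - p)**N >= target for p.
max_error_per_op = 1 - target_success_prob ** (1 / total_logical_ops)
print(f"Per-operation logical error rate must stay below ~{max_error_per_op:.1e}")
```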
On the software side, the theoretical resource counts depend on carefully crafted versions of Shor’s algorithm that minimize depth and exploit specific hardware capabilities, such as flexible connectivity and mid-circuit measurement. Translating those abstract circuits into executable control sequences on a real neutral-atom device is nontrivial. Compiler stacks must be able to map logical operations onto physical layouts that change dynamically as atoms are shuffled between zones.
Still, the direction of travel is clear. A decade ago, discussions of quantum computers breaking RSA were couched in caveats about the need for millions or even billions of qubits, making the threat feel remote. The combination of scalable neutral-atom hardware, demonstrated logical qubits, and a resource framework that points to the 10,000 to 20,000 range has moved the conversation into a different phase. The bottleneck is no longer an apparently impossible hardware scale, but a demanding, though finite, set of engineering milestones.
For policymakers and security professionals, the prudent response is not panic but acceleration. Post-quantum cryptographic standards exist, and migration plans can be executed over years rather than decades. The new Caltech results do not guarantee that a code-breaking quantum computer will appear on a specific date, but they narrow the uncertainty enough that inaction carries growing risk. As the experimental record of large, coherent atom arrays and error-protected operations continues to improve, the theoretical line between 10,000 qubits on paper and 10,000 qubits in a lab will only become thinner.
*This article was researched with the help of AI, with human editors creating the final content.