Princeton University researchers have built a quantum chip on a high-resistivity silicon substrate that demonstrates millisecond-scale coherence, a result that could influence how the quantum computing sector approaches scalable processor design. The team, led by Andrew Houck, published its findings in a Nature paper dated November 5, 2025, according to Princeton’s announcement. The work centers on a tantalum-on-high-resistivity-silicon materials platform that, as reported in the paper and related materials, achieved a best qubit lifetime of 1.68 milliseconds and a single-qubit gate fidelity of 99.994%, pointing toward a potentially more hardware-efficient route to fault-tolerant quantum computing.
The device is part of a broader effort to push beyond incremental improvements in qubit performance and instead re-engineer the underlying materials stack. By focusing on the microscopic defects and interfaces that quietly sap quantum information, the Princeton team has shown that long-lived, high-fidelity qubits need not be exotic or incompatible with established chip fabrication. Their results suggest that quantum processors capable of running deep, error-corrected algorithms may be built on platforms closely aligned with mainstream semiconductor manufacturing, rather than on bespoke materials that are difficult to scale.
Why Tantalum on Silicon Changes the Equation
Most superconducting qubits lose their quantum state within tens to hundreds of microseconds, a window so brief that error-correction overhead consumes the bulk of a processor’s capacity. The Princeton group attacked this problem at the materials level. By depositing a tantalum base layer on a high-resistivity silicon substrate, the team suppressed the dielectric losses that typically degrade qubit performance. The approach drew on earlier research that disentangled surface and bulk loss channels in tantalum circuits, isolating the two-level-system defects responsible for energy decay. That foundational work gave the engineers a clear target: minimize contamination at the metal-substrate interface and tighten junction fabrication tolerances so that stray defects have fewer places to hide.
The payoff showed up across the full chip, not just on a single cherry-picked device. The Nature report describes time-averaged quality factors across 45 qubits, with the best individual qubit reaching a T1 lifetime of 1.68 milliseconds and coherence times exceeding 1 millisecond. That consistency matters because quantum processors are only as reliable as their weakest qubit; a single short-lived device in a chain can cascade errors through an entire computation. Achieving uniformly high coherence across dozens of qubits on the same wafer signals that the fabrication process itself is stable and reproducible, not dependent on luck during a particular deposition run, and it opens the door to larger arrays built on the same tantalum-on-silicon recipe.
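For readers who want to connect the quoted lifetimes to the quality factors the Nature report describes, the standard relation is Q = 2πfT1, where f is the qubit’s transition frequency. Below is a minimal back-of-the-envelope sketch, assuming a typical transmon frequency of about 5 GHz; the actual frequencies are not quoted in the article.

```python
import math

# Rough quality-factor estimate from the reported T1 lifetime.
# Assumption: a ~5 GHz transition frequency, typical for superconducting
# transmons but not stated in the article.
f_qubit_hz = 5e9        # assumed qubit transition frequency
t1_seconds = 1.68e-3    # best reported T1 lifetime, 1.68 ms

# Q = omega * T1 = 2 * pi * f * T1
quality_factor = 2 * math.pi * f_qubit_hz * t1_seconds
print(f"Implied quality factor: {quality_factor:.2e}")  # roughly 5e7
```

Under that assumption the implied quality factor lands in the tens of millions, illustrating why millisecond lifetimes translate into the high quality factors the report emphasizes.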
Gate Fidelity That Shrinks Error Budgets
Long coherence is necessary but not sufficient for practical quantum computing. Qubits also need to execute logic operations, called gates, with extreme precision. Each gate that falls short of perfection introduces noise, and that noise compounds across thousands of sequential operations. The Princeton chip’s single-qubit gate fidelity of 99.994%, as detailed in an open fabrication preprint, corresponds to an error rate of just 0.006% per operation. For context, many fault-tolerant quantum error-correction schemes require gate fidelities above 99.9% to function at all. Beating that threshold’s error rate by more than an order of magnitude means fewer physical qubits would be needed to encode each logical qubit, directly reducing the hardware overhead that makes current quantum machines so large and unwieldy.
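To see why that margin matters over a realistic workload, one can naively compound the per-gate error across a deep circuit. The sketch below is an illustration only, assuming independent and uncorrelated gate errors rather than the error model used in the paper.

```python
# Naive compounding of per-gate error over a deep circuit.
# Illustration only: assumes independent, uncorrelated errors, which is a
# simplification of how fault-tolerance thresholds are actually defined.

def survival_probability(fidelity: float, n_gates: int) -> float:
    """Probability that n_gates sequential gates all succeed."""
    return fidelity ** n_gates

n_gates = 10_000
for fidelity in (0.999, 0.99994):   # threshold-level vs. reported fidelity
    p_ok = survival_probability(fidelity, n_gates)
    print(f"fidelity={fidelity}: P(no error after {n_gates:,} gates) = {p_ok:.4f}")

# fidelity=0.999   -> ~0.0000 (an uncorrected run almost certainly fails)
# fidelity=0.99994 -> ~0.5488 (roughly even odds before any correction)
```

Even under this crude model, the difference between threshold-level and Princeton-level fidelity separates a ten-thousand-gate circuit that essentially never completes cleanly from one that succeeds about half the time before any error correction is applied.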
This is where the practical consequences become clearest. If a future processor could maintain these fidelity and coherence numbers at scale, the ratio of physical qubits to useful logical qubits would shrink dramatically. That ratio is the central bottleneck in quantum computing today, with many experimental platforms needing thousands of physical qubits to protect a single logical qubit from errors. Cutting that requirement means smaller chips, lower cooling costs, and faster paths to machines that can tackle chemistry simulations, secure communication protocols, or optimization problems beyond the reach of classical supercomputers. It also aligns with system-level roadmaps being developed by collaborative efforts such as the Co-design Center for Quantum Advantage, which emphasize balancing qubit quality, control electronics, and error-correction codes in tandem.
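A common back-of-the-envelope model for that overhead is the surface code, where the logical error rate falls roughly as (p/p_th)^((d+1)/2) for code distance d and each logical qubit consumes on the order of 2d² physical qubits. The sketch below uses those textbook scaling formulas with assumed constants; the specific numbers are illustrative and do not come from the Princeton paper.

```python
# Illustrative surface-code overhead estimate.
# Assumptions (textbook scaling, not figures from the Princeton paper):
#   - logical error per cycle ~ 0.1 * (p / p_th) ** ((d + 1) / 2)
#   - threshold p_th ~ 1%, and ~2 * d**2 physical qubits per logical qubit

def distance_needed(p_phys: float, p_th: float = 1e-2,
                    target_logical: float = 1e-12) -> int:
    """Smallest odd code distance d that reaches the target logical error rate."""
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > target_logical:
        d += 2
    return d

for p_phys in (1e-3, 6e-5):   # threshold-level error vs. 99.994% fidelity
    d = distance_needed(p_phys)
    print(f"p = {p_phys:.0e}: distance {d}, ~{2 * d * d} physical qubits per logical qubit")
```

Under these assumptions, dropping the physical error rate from 0.1% to 0.006% cuts the estimated code distance from 21 to 9 and the overhead from roughly 880 to roughly 160 physical qubits per logical qubit, illustrating how higher fidelity translates directly into smaller machines.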
A Manufacturing Path, Not Just a Lab Demo
Academic breakthroughs in qubit performance often stall at the transition from laboratory prototypes to volume production. What distinguishes the Princeton result is that its materials stack, tantalum on high-resistivity silicon, relies on substrates and deposition techniques already familiar to the semiconductor industry. According to engineering statements from the university, the Ta-on-Si design could translate to wafer-scale fabrication, the same large-format manufacturing process used to produce classical computer chips by the billions. That compatibility removes one of the steepest barriers between a promising qubit design and a commercially viable quantum processor, namely the need to invent an entirely new industrial ecosystem around exotic materials.
Andrew Houck and Nathalie de Leon, both named investigators on the project, provided on-the-record comments through Princeton communications framing the result as a step toward processors that can handle real-world problems rather than just laboratory benchmarks. The emphasis on scalability is deliberate. Some quantum hardware efforts have demonstrated chips with very large qubit counts, but devices with shorter coherence times and lower gate fidelities still face limits in computational reach. Princeton’s contribution flips the priority: fewer qubits, but each one far more reliable, an approach that could prove more efficient once error-correction overhead and cryogenic infrastructure are factored into the overall system design.
Silicon’s Expanding Role in Quantum Networks
The Princeton result is not the only recent advance involving silicon-based platforms in quantum technology. Researchers at UC Santa Barbara have developed a telecom-compatible qubit in silicon designed to operate at wavelengths used by existing fiber-optic infrastructure, as described in a campus news release. That work targets a different piece of the quantum technology puzzle: connecting quantum processors over long distances rather than improving the processors themselves. But the shared reliance on silicon as a host material raises an intriguing possibility. If high-coherence processing qubits and telecom-interface qubits can both be fabricated on silicon platforms, integrating them on a single chip or within a single cooling system becomes far more plausible than if each element required its own bespoke substrate and fabrication line.
The UC Santa Barbara team, whose results were also highlighted in an independent EurekAlert summary, demonstrated that silicon can host quantum states compatible with standard telecommunications hardware while maintaining robustness against certain crystal-scale defects. Taken together with Princeton’s long-lived tantalum-on-silicon qubits, these advances suggest that silicon is emerging as a unifying platform for both on-chip computation and off-chip networking. In a future quantum internet, one could imagine cryogenic racks where high-fidelity superconducting qubits perform calculations, while neighboring silicon-based photonic interfaces translate those fragile states into light pulses that travel across conventional fiber links, all built on manufacturing processes that leverage decades of semiconductor experience.
From Materials Breakthrough to System-Level Impact
The tantalum-on-silicon result also reframes how researchers think about optimizing quantum systems as a whole. Instead of treating coherence times and gate fidelities as isolated metrics, it encourages a co-design perspective in which materials science, device geometry, control electronics, and error-correction algorithms are tuned together. With millisecond-scale lifetimes and ultra-clean gates, for example, code designers can afford deeper circuits and more sophisticated decoding strategies, potentially unlocking algorithms that were previously dismissed as impractical. Conversely, system architects can revisit trade-offs in wiring density, multiplexing schemes, and cryostat design, knowing that the qubits themselves provide a larger performance margin before decoherence sets in.
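One way to make that larger margin concrete is to compare the coherence window against typical gate durations. Below is a minimal sketch, assuming single-qubit gate times of a few tens of nanoseconds, which are representative for superconducting processors but not quoted in the article.

```python
# Rough gate budget inside the coherence window.
# Assumption: 20-40 ns single-qubit gate times, typical for superconducting
# hardware but not stated in the article.

t1_seconds = 1.68e-3    # best reported qubit lifetime
for gate_time_ns in (20, 40):
    gate_budget = t1_seconds / (gate_time_ns * 1e-9)
    print(f"{gate_time_ns} ns gates: ~{gate_budget:,.0f} operations per T1 window")

# Tens of thousands of operations fit inside a 1.68 ms lifetime, versus
# hundreds to a few thousand for microsecond-scale qubits.
```

That widened budget is what lets error-correction designers contemplate deeper circuits and more elaborate decoding without exhausting the qubits’ lifetime.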
There are still formidable challenges ahead. Two-qubit gate fidelities must match or approach the single-qubit numbers to fully realize fault-tolerant thresholds, and scaling from dozens to hundreds or thousands of qubits will test the uniformity of the Ta-on-Si process under more demanding conditions. Crosstalk, packaging losses, and control-line heating all become more severe as chips grow. Yet the Princeton work provides a concrete demonstration that materials-level engineering can unlock order-of-magnitude gains in the core figures of merit that govern quantum computation. Coupled with parallel progress in silicon-based networking qubits and coordinated efforts across national quantum centers, it points toward an ecosystem in which high-performing, manufacturable quantum hardware is no longer a distant aspiration but an emerging reality.
*This article was researched with the help of AI, with human editors creating the final content.