IBM, RIKEN, and the Cleveland Clinic simulated a 12,000-atom protein-ligand complex by linking quantum computers with Japanese supercomputers

In late May 2026, a research team from IBM, Japan’s RIKEN institute, and the Cleveland Clinic reported that they had simulated protein-ligand complexes containing up to 12,635 atoms by linking quantum processors with some of the world’s most powerful classical supercomputers. The result, described in a preprint posted to arXiv, represents, by atom count, the largest hybrid quantum-classical chemistry calculation ever applied to a biologically relevant protein system. (Larger molecular simulations have been performed using purely classical methods, but none at this scale have incorporated quantum processors in the workflow.) It is also a concrete signal that quantum hardware has crossed a threshold: it can now participate in molecular simulations at a scale that drug discovery scientists actually care about.

To appreciate why that matters, consider the core problem. Pharmaceutical researchers need to understand, at the electronic level, how a candidate drug molecule binds to a protein target. Classical supercomputers can approximate these interactions, but certain quantum-mechanical effects, especially in molecules with complex electron correlations, push even the best classical methods to their limits. Quantum processors, in theory, handle those effects natively. The catch has always been scale: until now, quantum chemistry demonstrations have been confined to small molecules or toy systems far removed from real biology.

This latest work closes much of that gap.

What the team actually did

The collaboration targeted two protein-ligand complexes with 11,608 and 12,635 atoms, respectively. Rather than asking quantum hardware to shoulder the entire calculation, the team used a fragment-based approach: the protein system was broken into smaller chemical pieces, each tractable for quantum or classical treatment, and the results were stitched back together to reconstruct the full electronic picture.
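The preprint's exact fragmentation scheme is not spelled out in the publicly available summaries, but fragment methods generally share the same bookkeeping: compute each piece on its own, then add pairwise corrections so that interactions between fragments are not lost. The following is a minimal, hypothetical Python sketch of that assembly step; solve_energy is a stand-in for whichever quantum or classical solver handles a given fragment.

```python
# Minimal sketch of fragment-based energy assembly, in the spirit of a
# two-body many-body expansion (illustrative only; not the team's
# actual decomposition scheme):
#   E_total ~ sum_i E(i) + sum_{i<j} [E(i U j) - E(i) - E(j)]
from itertools import combinations

def assemble_energy(fragments, solve_energy):
    """fragments: list of fragment descriptors (e.g., atom index lists).
    solve_energy: callable returning the electronic energy of a fragment
    or a fused pair; in a hybrid workflow this call is routed to either
    a quantum processor or a classical code, depending on the fragment."""
    one_body = {i: solve_energy(frag) for i, frag in enumerate(fragments)}
    total = sum(one_body.values())
    for i, j in combinations(range(len(fragments)), 2):
        fused = fragments[i] + fragments[j]  # treat the pair as one piece
        total += solve_energy(fused) - one_body[i] - one_body[j]
    return total
```

Production fragmentation schemes add considerably more machinery, capping the covalent bonds cut by the partition, screening out distant pairs, and sometimes including three-body terms, but the reassembly logic follows this pattern.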

Two 156-qubit IBM quantum processors, designated ibm_cleveland and ibm_kobe, handled the quantum portions of the workflow, employing up to 94 qubits and executing 9,200 quantum circuits across 10 computational units. The heavy classical lifting fell to the University of Tokyo’s GPU-accelerated Miyabi-G supercomputer and, in earlier stages of the project’s development, RIKEN’s Fugaku, a 152,064-node machine that remains one of the fastest supercomputers on Earth.
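The team's orchestration code is not public, but the basic mechanics of sending batches of circuits to a named IBM backend follow IBM's documented Qiskit Runtime pattern. The sketch below assumes saved IBM Quantum credentials, uses random circuits as a stand-in payload, and treats the backend name from the paper purely as a placeholder.

```python
# Hypothetical sketch of batch circuit execution via Qiskit Runtime.
# Assumes an IBM Quantum account has been saved locally; the backend
# name is a placeholder, not a claim about access to the paper's device.
from qiskit.circuit.random import random_circuit
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

service = QiskitRuntimeService()
backend = service.backend("ibm_kobe")  # placeholder backend name

# Stand-in payload; the real workflow submits thousands of chemistry circuits.
circuits = [random_circuit(num_qubits=5, depth=4, measure=True)
            for _ in range(8)]

# Compile to the device's native gates and qubit connectivity.
pm = generate_preset_pass_manager(backend=backend, optimization_level=3)
isa_circuits = pm.run(circuits)

# Execute and collect measurement counts for downstream classical processing.
sampler = Sampler(mode=backend)
job = sampler.run(isa_circuits, shots=4096)
counts = [res.data.meas.get_counts() for res in job.result()]
```

At the scale reported in the preprint, the hard part is not any single submission but scheduling thousands of such jobs and feeding their outputs into a supercomputer-side solver without stalling either machine.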

The key technical method is called sample-based quantum diagonalization, or SQD. A peer-reviewed paper in Science Advances previously validated SQD on iron-sulfur clusters, biologically important but computationally demanding molecules, showing it could solve chemistry problems beyond the reach of exact diagonalization, a gold-standard classical technique. That validation, while conducted at a smaller scale, provides the methodological foundation for the protein-scale work.
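In rough outline, SQD uses the quantum processor as a sampler: measured bitstrings identify a set of electronic configurations, and the molecular Hamiltonian is then diagonalized classically within the subspace those configurations span. The toy sketch below shows only that subspace step, with a random symmetric matrix standing in for a molecular Hamiltonian; real SQD works with second-quantized operators, enforces particle-number symmetry, and iterates a configuration-recovery loop.

```python
# Toy illustration of the subspace-diagonalization step in SQD.
# A random symmetric matrix stands in for the molecular Hamiltonian;
# real workflows never build this dense matrix explicitly.
import numpy as np

rng = np.random.default_rng(7)
n_qubits = 10
dim = 2 ** n_qubits

H = rng.standard_normal((dim, dim))
H = (H + H.T) / 2  # make it symmetric, like a real Hamiltonian matrix

# Pretend these bitstrings came from measuring a quantum circuit that
# concentrates amplitude on chemically important configurations.
samples = {int("".join(map(str, rng.integers(0, 2, n_qubits))), 2)
           for _ in range(50)}
idx = sorted(samples)

# Project H onto the sampled configurations and diagonalize classically.
H_sub = H[np.ix_(idx, idx)]
ground_estimate = np.linalg.eigvalsh(H_sub)[0]
print(f"subspace dim {len(idx)} of {dim}, energy estimate {ground_estimate:.4f}")
```

The appeal is that the noise-sensitive quantum step is reduced to sampling, while the energy itself comes from an exact classical diagonalization of a much smaller problem.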

A documented scaling trajectory

The 12,635-atom result did not appear out of nowhere. The team built toward it through a series of progressively larger demonstrations, each documented in its own technical paper.

First, a preprint established the concept of closed-loop electronic structure calculations running across a quantum processor and Fugaku at full scale. That work proved the engineering plumbing could handle real-time data exchange between quantum and classical machines separated by significant physical distance.
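Neither that preprint's loop internals nor its networking stack is reproduced here, but the control flow it describes is straightforward to sketch: the quantum side produces samples, the classical side solves a subspace problem, and the loop repeats until the energy stops improving. Everything below is placeholder logic meant only to show that structure.

```python
# Schematic closed-loop hybrid iteration. Both worker functions are
# placeholders: in the real system the quantum step runs circuits on a
# processor and the classical step runs a large diagonalization on a
# supercomputer, with data exchanged over the network each round.
import random

def quantum_sample(n_samples=32, n_bits=8):
    # Placeholder for executing measurement circuits on quantum hardware.
    return [random.getrandbits(n_bits) for _ in range(n_samples)]

def classical_solve(samples):
    # Placeholder for projecting the Hamiltonian onto the sampled
    # configurations and diagonalizing it classically.
    return min(samples) / float(2 ** 8)

def closed_loop(max_rounds=10, tol=1e-3):
    last_energy = float("inf")
    for _ in range(max_rounds):
        energy = classical_solve(quantum_sample())
        if abs(last_energy - energy) < tol:  # converged; stop iterating
            break
        last_energy = energy
    return last_energy

print(closed_loop())
```

When the quantum and classical halves run on machines separated by significant physical distance, the engineering challenge is keeping each round's data exchange fast enough that neither machine sits idle.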

Next, a separate preprint applied the fragment-based quantum workflow to the roughly 300-atom Trp-cage miniprotein, a standard test case in computational biology. The jump from Trp-cage to the new protein-ligand complexes represents roughly a 40-fold increase in system size, a leap that required not just better algorithms but also careful orchestration of thousands of quantum circuits alongside massive classical resources.

IBM and RIKEN have framed this progression as part of their quantum-centric supercomputing program, a strategy in which quantum devices function as specialized accelerators embedded inside large classical workflows rather than as stand-alone machines. The Cleveland Clinic joined the effort through its Discovery Accelerator partnership with IBM, announced in 2021, which aims to apply quantum computing and AI to biomedical research.

What remains uncertain

The preprint has not yet been peer reviewed, and several important questions remain open.

The Cleveland Clinic’s specific contribution is not detailed in publicly available materials beyond its affiliation on the paper. Whether the clinic provided biological targets, validated results against experimental assay data, or played a primarily computational role is unclear from the arXiv listing alone.

Error rates and fidelity metrics for the 94-qubit circuits have not been independently reported outside the preprint. Quantum processors at this scale are notoriously noisy, and the degree to which error mitigation techniques preserved the accuracy of the final energy calculations will be a central question during peer review. The precursor papers describe error-handling strategies, including circuit optimization and statistical post-processing, but none provide a direct comparison showing how the 12,000-atom results stack up against purely classical benchmarks for the same systems.

Neither RIKEN nor the University of Tokyo has published separate institutional records detailing the exact supercomputer resources allocated to these runs. The technical specifics come entirely from the research team’s own descriptions, which is standard for preprints but means the infrastructure claims have not been independently confirmed.

There is also the question of cost and speed. Density functional theory and high-level correlated wavefunction methods already support large-scale molecular simulations on modern supercomputers. Without detailed timing and resource comparisons, it is difficult to know whether the quantum component currently accelerates the overall computation, merely keeps pace, or even slows the workflow while offering a potential path to future advantage as hardware improves.

How this differs from AI protein tools

Readers familiar with tools like AlphaFold may wonder how this work relates. The distinction is fundamental. AlphaFold and similar AI systems predict protein structures, essentially the 3D shape a protein folds into, using pattern recognition trained on known structures. They do not simulate the underlying quantum-mechanical behavior of electrons.

What the IBM-RIKEN-Cleveland Clinic team is doing operates at a deeper physical level: calculating electronic energies and interactions that govern how a drug molecule binds to a protein pocket. These electronic-structure calculations can, in principle, capture subtle effects that shape-based or force-field-based methods miss, such as charge transfer between a drug and its target, or the behavior of metal ions at an enzyme’s active site. The two approaches are complementary, not competing.

Where this sits in the quantum computing landscape

IBM is not the only company pursuing quantum chemistry at scale. Google has published work on quantum simulations of chemical systems using its Sycamore and Willow processors. Microsoft has invested heavily in topological qubits with an eye toward chemistry applications. Startups such as QunaSys have built software platforms for quantum chemistry workflows, as did Zapata AI before it wound down operations in 2024.

What distinguishes the IBM-RIKEN result is the sheer system size, 12,635 atoms in a biologically relevant protein-ligand complex, and the demonstrated integration with world-class supercomputing infrastructure. Most competing demonstrations have targeted molecules with far fewer atoms. The fragment-based approach, combined with the engineering to orchestrate quantum and classical resources across continents, gives this work a practical edge even if the quantum hardware itself is not yet delivering a raw performance advantage over classical alternatives.

Why the 12,635-atom simulation matters for drug discovery

The most honest reading of this result is that it is an important engineering milestone rather than a definitive scientific breakthrough. The team has shown that hybrid quantum-classical pipelines can orchestrate thousands of quantum circuits alongside some of the world’s largest supercomputers, and that such pipelines can target protein-ligand complexes of genuine pharmaceutical interest.

What it has not yet shown is that quantum resources are essential for these specific calculations, or that they deliver answers materially different from what the best classical tools produce. That distinction matters enormously for drug discovery, where the bar is not “can you run the simulation?” but “does the simulation change which compounds you advance to clinical trials?”

The next steps are well defined. Independent groups will need to reproduce similar workflows on different hardware and for different molecular systems. Comparative studies against state-of-the-art classical methods will have to quantify any accuracy or performance gains. Peer review will probe the assumptions in the fragment-based approach, the robustness of error mitigation, and the reliability of the reported energy landscapes.

Until then, the 12,635-atom simulation stands as the most ambitious demonstration yet that quantum computers can participate meaningfully in the kind of molecular science that underpins modern drug development. The road to practical quantum advantage in chemistry is still long, but this result makes it considerably harder to argue that the destination is purely theoretical.

*This article was researched with the help of AI, with human editors creating the final content.