The U.S. Department of Energy now has two major supercomputing systems aimed at accelerating fusion energy research through artificial intelligence. Argonne National Laboratory’s Aurora exascale machine opened to scientists in 2025, and DOE has awarded a contract for a second system, named Doudna, due to arrive in 2026. Together, these machines represent a deliberate federal bet that AI-driven simulation can solve problems that have stalled fusion progress for decades, particularly the challenge of predicting and preventing plasma disruptions inside experimental reactors.
Aurora Targets Fusion’s Hardest Problem
Fusion reactors must sustain superheated plasma at extreme temperatures, and sudden instabilities called disruptions can damage equipment and halt experiments. Predicting these events fast enough to intervene has been one of the field’s most persistent technical barriers. The Aurora system at Argonne is now being applied to this exact problem, combining high-fidelity physics simulations with machine learning to model plasma behavior at scales that were previously impractical.
What makes Aurora’s role distinct is the integration of simulation, AI training, and data analysis on a single system. Rather than running traditional physics codes and then separately training neural networks on the results, researchers can iterate between both on the same hardware. This collapses the feedback loop between simulation output and model refinement, which matters because fusion disruptions unfold on millisecond timescales. A predictor that takes hours to retrain is far less useful than one that can be updated and redeployed within a single experimental campaign.
Argonne released Aurora to researchers as a combined AI and simulation platform, and early access projects through the INCITE program and Aurora Early Science Program are already running. The fusion work sits alongside other physics and energy technology projects, but the plasma disruption use case is among the most demanding because it requires both extreme computational throughput and low-latency model inference. For fusion teams, the ability to run full-device plasma simulations while simultaneously training control-oriented neural networks is central to testing whether AI can keep reactors stable in real time.
Deep Learning Roots in Disruption Prediction
The scientific foundation for this work predates Aurora by several years. A widely cited 2019 Nature paper on deep learning established the Fusion Recurrent Neural Network, or FRNN, as a viable approach for forecasting plasma instabilities in tokamak reactors. That research demonstrated that recurrent neural networks trained on experimental data could identify disruption precursors with enough lead time to be operationally useful, outperforming some traditional rule-based predictors.
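The core idea is simple to sketch, even if the production models are far larger. The toy below is not the published FRNN code; it is a minimal, hand-weighted recurrent scorer in plain NumPy that shows the shape of the approach: diagnostic vectors stream in one timestep at a time, a recurrent state accumulates history, and a per-timestep risk score triggers an alarm when it crosses a threshold.

```python
import numpy as np

def rnn_disruption_scores(signals, W_in, W_h, w_out, b=0.0):
    """Run a one-layer recurrent net over a time series of diagnostic
    vectors and return a disruption-risk score in (0, 1) per timestep.
    Illustrative stand-in for an FRNN-style predictor, not the real model."""
    h = np.zeros(W_h.shape[0])
    scores = []
    for x in signals:
        h = np.tanh(W_in @ x + W_h @ h)        # recurrent state carries history
        scores.append(1.0 / (1.0 + np.exp(-(w_out @ h + b))))  # sigmoid risk
    return np.array(scores)

def first_alarm(scores, threshold=0.5):
    """Index of the first timestep whose risk exceeds the threshold,
    or -1 if no alarm ever fires."""
    above = np.nonzero(scores > threshold)[0]
    return int(above[0]) if above.size else -1
```

The operationally important quantity is the gap between `first_alarm` and the actual disruption time: that lead time is what determines whether a control system can mitigate or merely log the event.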
Subsequent work moved the FRNN predictor from a research prototype into something closer to a real control tool. A peer-reviewed study indexed by DOE’s Office of Scientific and Technical Information documented the integration of the deep-learning disruption predictor into a plasma control system, using signals from the DIII-D tokamak. That study included interpretability and sensitivity scores for the predictor, addressing a concern that pure black-box AI models would be too opaque for safety-critical fusion applications. By probing which diagnostic channels most strongly influenced the network’s outputs, researchers could compare the AI’s focus with known physics and identify potential failure modes.
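The channel-probing idea described above can be illustrated with a generic finite-difference sensitivity check. This is a hypothetical helper, not the published DIII-D tooling: it perturbs one diagnostic channel at a time and measures how much the model's overall risk score moves, giving a crude ranking of which inputs the network actually relies on.

```python
import numpy as np

def channel_sensitivity(model, signals, eps=1e-3):
    """Estimate how strongly each diagnostic channel drives a scalar
    risk score, by bumping one channel at a time across the whole
    time series and taking finite differences.

    model   -- callable mapping a (T, C) signal array to a scalar score
    signals -- (T, C) array: T timesteps, C diagnostic channels
    Returns a length-C array of sensitivity magnitudes."""
    base = model(signals)
    sens = np.zeros(signals.shape[1])
    for c in range(signals.shape[1]):
        bumped = signals.copy()
        bumped[:, c] += eps                  # perturb only channel c
        sens[c] = abs(model(bumped) - base) / eps
    return sens
```

Comparing such rankings against known disruption physics (for example, whether the model leans on locked-mode or radiated-power signals) is one way to build the operator trust the article describes.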
The gap between those earlier results and what Aurora enables is computational scale. Training disruption predictors on data from a single tokamak is useful but limited. Fusion devices vary in size, magnetic configuration, and operating regime, so a predictor trained only on DIII-D data may not transfer well to ITER or other next-generation machines. Exascale computing allows researchers to train on far larger and more diverse datasets, and to couple neural network training with first-principles plasma simulations that can generate synthetic disruption scenarios for conditions no existing device has yet produced. This “hybrid data” approach, combining experimental measurements with simulation output, demands both enormous floating-point capacity and high-throughput AI accelerators, precisely what Aurora was designed to provide.
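The "hybrid data" idea reduces, at the data-loading level, to mixing real and simulator-generated shots at a controlled ratio. The sketch below is an assumption about how such a loader might look, with illustrative names, not any lab's actual pipeline:

```python
import numpy as np

def build_hybrid_batch(experimental, synthetic, synth_fraction, batch_size, rng):
    """Draw one training batch mixing experimental shots with
    simulator-generated ones at a fixed ratio.

    experimental, synthetic -- (N, C) arrays of feature vectors
    synth_fraction          -- fraction of the batch drawn from simulation
    Returns (batch, source) where source marks 0=experimental, 1=synthetic."""
    n_synth = int(round(batch_size * synth_fraction))
    n_exp = batch_size - n_synth
    exp_idx = rng.choice(len(experimental), size=n_exp, replace=True)
    syn_idx = rng.choice(len(synthetic), size=n_synth, replace=True)
    batch = np.concatenate([experimental[exp_idx], synthetic[syn_idx]])
    source = np.array([0] * n_exp + [1] * n_synth)
    return batch, source
```

Keeping the source labels around matters: they let researchers check whether a predictor's accuracy comes from real measurements or leans too heavily on synthetic scenarios the simulator may render imperfectly.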
Doudna Extends the Computing Pipeline
While Aurora handles the immediate research workload, DOE is already building its successor pipeline. The department awarded Dell a contract to develop NERSC-10, the next flagship system for the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. The machine is named Doudna, after Nobel laureate Jennifer Doudna, and is due to come online in 2026.
NERSC leadership, DOE officials, and executives from Dell and NVIDIA all provided statements about the system, reflecting the breadth of institutional and corporate investment behind it. Doudna is designed specifically for workloads that integrate simulation, data analysis, and AI, the same hybrid approach that defines the fusion research running on Aurora. Lawrence Berkeley National Laboratory’s communications highlighted Doudna’s relevance to fusion energy among its intended applications, placing plasma physics alongside climate modeling, materials science, and other data-intensive fields.
The timing matters. If Doudna arrives as planned, DOE will have two systems optimized for AI–simulation convergence operating simultaneously across different national laboratories. NERSC serves a broader scientific user base than the Argonne Leadership Computing Facility, so Doudna could extend fusion AI work to research groups that lack dedicated allocations on Aurora. The existing user portal at NERSC and its allocation processes already connect thousands of researchers to high-performance computing resources, and adding a machine built for hybrid AI workloads changes what those users can realistically attempt.
Equally important is the support infrastructure. NERSC’s extensive documentation and consulting services are geared toward helping domain scientists adapt their codes to new architectures. For fusion researchers, that could mean assistance in porting plasma simulation codes to GPU-heavy nodes, optimizing data pipelines for streaming diagnostics, or scaling deep-learning workflows that were originally developed on much smaller clusters. As Doudna comes online, that human layer of expertise will likely be as important as the raw hardware in determining how quickly fusion projects can take advantage of the new system.
Why Hardware Alone Will Not Solve Fusion
Most coverage of new supercomputers focuses on raw performance numbers and institutional enthusiasm. That framing misses a harder question: whether the scientific software and training data are ready to exploit these machines. Exascale hardware is necessary but not sufficient. The FRNN disruption predictor, for instance, was validated on a single device’s data. Scaling it to work across multiple tokamak designs requires not just more compute but also standardized data formats, curated cross-device datasets, and robust procedures for handling measurement noise and diagnostic gaps.
On the software side, many legacy plasma physics codes were written for CPU-dominated architectures and do not yet fully exploit modern accelerators. Porting these codes to run efficiently on Aurora and Doudna involves algorithmic refactoring, new numerical methods, and careful validation to ensure that physics results remain trustworthy. Without that work, exascale machines risk being underutilized by the very communities they are meant to serve. The same is true for AI models: disruption predictors must be embedded into control frameworks, tested against edge cases, and benchmarked for latency and reliability before they can influence real experimental operations.
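Latency benchmarking, at least, is straightforward to sketch. A minimal version, assuming nothing about the model beyond a callable interface, times repeated inferences and reports tail percentiles, since a control loop cares about the slowest predictions, not the average:

```python
import time
import numpy as np

def latency_percentiles(predict, inputs, n_warmup=10):
    """Measure per-inference wall-clock latency for a predictor and
    return the 50th and 99th percentiles in seconds. Warm-up calls
    absorb one-time costs (JIT compilation, caches, allocators)."""
    for x in inputs[:n_warmup]:
        predict(x)
    times = []
    for x in inputs:
        t0 = time.perf_counter()
        predict(x)
        times.append(time.perf_counter() - t0)
    return np.percentile(times, 50), np.percentile(times, 99)
```

For a disruption predictor meant to act on millisecond timescales, it is the 99th-percentile number that decides whether the model can sit inside a real control loop.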
Data access and governance present another bottleneck. High-quality disruption prediction depends on years of experimental shots from multiple facilities, each with its own diagnostic configurations and metadata conventions. Building training corpora that span these differences requires coordinated efforts among laboratories, agreements on data sharing, and tools that can automatically align and clean heterogeneous signals. Aurora and Doudna can accelerate the training once the data are in place, but they cannot by themselves solve the institutional coordination problem.
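One concrete piece of that alignment problem is that diagnostics on different devices run on different clocks and sampling rates. A simple version of the automatic-alignment step, sketched here with linear interpolation onto a shared timebase (illustrative, not any facility's actual tooling), looks like this:

```python
import numpy as np

def align_signals(t_a, a, t_b, b, dt):
    """Resample two diagnostic signals recorded on different clocks onto
    one shared, evenly spaced timebase, restricted to their overlap.

    t_a, a -- timestamps and values of the first signal
    t_b, b -- timestamps and values of the second signal
    dt     -- target sample spacing of the common timebase
    Returns (t, a_resampled, b_resampled)."""
    t0 = max(t_a[0], t_b[0])                 # start of overlapping window
    t1 = min(t_a[-1], t_b[-1])               # end of overlapping window
    t = np.arange(t0, t1 + 0.5 * dt, dt)
    return t, np.interp(t, t_a, a), np.interp(t, t_b, b)
```

Real pipelines also have to handle dropped samples, unit mismatches, and device-specific metadata, which is where the institutional coordination the article describes becomes unavoidable.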
Finally, there is the challenge of interpretability and trust. Fusion experiments are expensive and hardware-limited, so operators are understandably cautious about handing control to AI systems whose failures could damage unique devices. The early work on sensitivity analysis for FRNN is a step toward more transparent models, but scaling to exascale AI introduces new questions about how to audit and certify increasingly complex networks. Addressing those questions will require collaboration among plasma physicists, computer scientists, and control engineers, not just more powerful chips.
In that sense, Aurora and Doudna should be seen less as silver bullets and more as enabling platforms. They make it possible to explore AI-driven control strategies, multi-device disruption predictors, and synthetic training datasets that were computationally out of reach a decade ago. Real progress toward practical fusion energy will depend on how effectively researchers use these platforms to connect data, models, and experiments into a coherent pipeline. The hardware may grab the headlines, but the harder work lies in turning exascale capability into reliable, physics-informed tools that can keep future reactors stable long enough to matter.
*This article was researched with the help of AI, with human editors creating the final content.