Inside the World of Supercomputers and Their Power

Supercomputers such as Frontier at Oak Ridge National Laboratory in Tennessee have reached exascale performance, with Frontier clocking 1.1 exaFLOPS on the Linpack benchmark in May 2022. These powerful machines, built by companies like HPE Cray and powered by AMD processors, are transforming fields from climate modeling to nuclear stockpile stewardship at facilities like Lawrence Livermore National Laboratory.

The Dawn of Supercomputing

Image Credit: TexasDex at English Wikipedia – CC BY-SA 3.0/Wiki Commons

The journey of supercomputing began with ENIAC, the first general-purpose electronic computer, completed in 1945 at the University of Pennsylvania. Weighing a hefty 30 tons, ENIAC could perform 5,000 additions per second, setting the stage for the modern supercomputers we see today. This technological marvel was just the beginning of a new era in computing.
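
To put those numbers side by side, here is a rough back-of-the-envelope comparison in Python, using only the figures quoted in this article (ENIAC’s 5,000 additions per second and Frontier’s 1.1 exaFLOPS). Additions and floating-point operations are not strictly comparable, so treat the result as an order-of-magnitude illustration.

```python
# Rough scale comparison using the figures quoted in this article.
eniac_ops_per_sec = 5_000      # ENIAC: additions per second (1945)
frontier_flops = 1.1e18        # Frontier: 1.1 exaFLOPS (2022)

speedup = frontier_flops / eniac_ops_per_sec
print(f"Speedup: roughly {speedup:.1e}x")              # about 2.2e14 times faster

# Work Frontier finishes in one second would have kept ENIAC busy for:
years = speedup / (60 * 60 * 24 * 365)
print(f"ENIAC equivalent: roughly {years:.1e} years")  # on the order of 7 million years
```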

Fast forward to 1964, when Seymour Cray, then the lead designer at Control Data Corporation, delivered the CDC 6600, which achieved about 3 megaFLOPS. Hailed as the first supercomputer, it brought about a revolution in scientific computing.

As supercomputing advanced, the 1970s saw the introduction of vector processing, in which a single instruction operates on many data elements at once. Cray Research, founded by Seymour Cray in 1972, was at the forefront of this development with the Cray-1, which peaked at 160 megaFLOPS. This supercomputer, with its iconic C-shaped design, was used in applications ranging from weather forecasting to nuclear physics.
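
To make the vector idea concrete, the sketch below contrasts an element-at-a-time loop with a whole-array operation using Python and NumPy. This is not the Cray-1’s hardware vector pipeline, just the same principle expressed in modern software; the array size and scale factor are arbitrary.

```python
import numpy as np

# Scalar approach: handle one element per operation, as early machines did.
def scale_scalar(values, factor):
    result = []
    for v in values:                      # each multiply issued individually
        result.append(v * factor)
    return result

# Vector approach: one operation applied across the whole array at once,
# the idea the Cray-1 exploited in hardware with vector registers.
def scale_vector(values, factor):
    return np.asarray(values) * factor    # NumPy applies the multiply element-wise

data = list(range(10))
assert scale_scalar(data, 2.0) == list(scale_vector(data, 2.0))
```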

In the 1990s, parallel processing became the norm, with supercomputers using many processors to divide and conquer complex computational tasks. The Intel Paragon, operational in 1993, was one such machine, achieving 143 gigaFLOPS. That same year, Fujitsu’s Numerical Wind Tunnel entered service at Japan’s National Aerospace Laboratory, simulating airflow at roughly 170 gigaFLOPS and contributing significantly to aerodynamic research. As the 21st century began, the focus shifted toward petaFLOPS and beyond, and IBM’s Roadrunner, installed at Los Alamos National Laboratory in 2008, became the first supercomputer to break the petaFLOP barrier.
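
The divide-and-conquer idea can be sketched in a few lines of Python: split a task into chunks and let several worker processes handle them simultaneously. Machines like the Paragon coordinated many processors with message passing rather than a shared pool, so this is only an illustration of the principle; the task, chunk sizes, and worker count are made up.

```python
from multiprocessing import Pool

# Each worker sums one chunk of the range independently.
def partial_sum(bounds):
    start, end = bounds
    return sum(range(start, end))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)      # last chunk absorbs any remainder

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))   # divide and conquer

    assert total == n * (n - 1) // 2     # matches the closed-form result
    print(total)
```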

Powering Weather and Climate Predictions

Image Credit: National Weather Service – Public domain/Wiki Commons

Supercomputers have become indispensable in weather and climate prediction. The European Centre for Medium-Range Weather Forecasts (ECMWF), headquartered in Reading, UK, uses Atos-built supercomputers to run its Integrated Forecasting System, which predicts global weather up to about 10 days ahead at a grid spacing of roughly 9 km. These machines have become a cornerstone of operational weather forecasting.
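
A quick, approximate calculation hints at why a 9 km global grid needs a supercomputer. The Python sketch below estimates the number of grid columns from Earth’s surface area; the 137 vertical levels are an assumed figure typical of current global models, not something stated here.

```python
import math

# Back-of-the-envelope: how many grid columns does a ~9 km global mesh need?
EARTH_RADIUS_KM = 6371.0
grid_spacing_km = 9.0

surface_area = 4 * math.pi * EARTH_RADIUS_KM ** 2     # ~5.1e8 km^2
columns = surface_area / grid_spacing_km ** 2          # roughly 6 million columns

vertical_levels = 137   # assumed level count, typical of a modern global model
cells = columns * vertical_levels

print(f"{columns:,.0f} columns, {cells:,.0f} grid cells per time step")
```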

In the United States, NOAA’s Global Forecast System (GFS) runs on supercomputers operated for the National Centers for Environmental Prediction in College Park, Maryland. These systems process a reported 20 petabytes of data daily, aiding hurricane tracking and climate simulations. Meanwhile, China’s Tianhe-1A, installed in 2010 at the National Supercomputing Center in Tianjin, simulated typhoon paths at 2.5 petaFLOPS, enhancing disaster preparedness in the region.

Supercomputers have also been instrumental in long-term climate modeling. The Earth Simulator, developed by NEC and operational in Japan since 2002, was used to simulate global climate patterns over spans of thousands of years. This supercomputer, which sustained 35.86 teraFLOPS on the Linpack benchmark, helped scientists understand the impact of human activities on climate change.

Furthermore, supercomputers have been used to study and predict the effects of climate change on specific regions. The Mistral supercomputer, operational at the German Climate Computing Center (DKRZ) since 2015, has been used to simulate the impact of climate change on the North Sea region. These simulations have helped policymakers make informed decisions about coastal protection and renewable energy deployment.

Simulating Nuclear Weapons and Security

Image Credit: partrickl/Unsplash

Supercomputers also play a crucial role in nuclear weapons simulation and security. The Sierra supercomputer at Lawrence Livermore National Laboratory, deployed in 2018 with IBM Power9 processors, runs multiphysics simulations for nuclear stockpile stewardship at up to 125 petaFLOPS. This high-performance computing capability is vital for maintaining the safety and reliability of the nation’s nuclear arsenal.

The U.S. Department of Energy’s Advanced Simulation and Computing (ASC) program uses machines of this class to certify the safety of the U.S. nuclear arsenal without physical tests, completing hydrodynamic simulations in just a few days. In France, the TERA supercomputer at the CEA in Bruyères-le-Châtel simulates nuclear fission processes at 1.5 petaFLOPS, supporting non-proliferation efforts since 2016.

Supercomputers have also been used to study how nuclear weapons degrade over time, a problem known as stockpile aging. The Trinity supercomputer at Los Alamos National Laboratory, operational since 2015, has been used for this purpose. With a peak performance of 41.5 petaFLOPS, Trinity has helped ensure that the U.S. nuclear arsenal remains safe and effective as it ages.

Moreover, supercomputers have been used to simulate the effects of nuclear detonations on infrastructure and the environment. The Sequoia supercomputer at Lawrence Livermore National Laboratory, operational since 2012, has been used for such simulations. With a peak performance of 20 petaFLOPS, Sequoia has provided valuable data for planning and preparedness in the event of a nuclear incident.

Medical and Scientific Breakthroughs

Image Credit: 文部科学省 (MEXT) – CC BY 4.0/Wiki Commons

Supercomputers have also made significant contributions to medical and scientific research. The Summit supercomputer at Oak Ridge, operational since June 2018, analyzed genomic and molecular data for COVID-19 drug discovery. Running at up to 200 petaFLOPS, it screened thousands of compounds and identified potential treatments in a matter of weeks. This rapid analysis was instrumental in the global fight against the pandemic.

Japan’s Fugaku supercomputer, built by Fujitsu and RIKEN in Kobe and ranked No. 1 on the TOP500 list in 2020, simulates protein folding for Alzheimer’s research at 442 petaFLOPS. Meanwhile, IBM’s Eagle quantum processor, announced in 2021 with 127 qubits, points toward quantum hardware that could accelerate molecular-dynamics simulations for rare-disease drug discovery.

Supercomputers have also been used to simulate the human brain, a task that requires immense computational power due to the complexity of the brain’s neural networks. The Blue Brain Project, based in Switzerland, has used IBM’s Blue Gene supercomputers for this purpose. These simulations have provided valuable insights into brain function and disease.
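
To give a sense of what simulating neurons involves, the toy below steps a single leaky integrate-and-fire neuron through time in Python. The Blue Brain Project’s models are far more detailed and run huge numbers of morphologically realistic cells, so the equations and constants here are purely illustrative.

```python
# Toy leaky integrate-and-fire neuron: membrane voltage leaks toward rest,
# integrates input current, and fires a spike when it crosses threshold.
dt, tau = 0.1, 10.0                               # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials (mV)

v = v_rest
spike_times = []
inputs = [20.0] * 1000                   # constant drive, arbitrary units

for step, i_in in enumerate(inputs):
    v += dt / tau * (-(v - v_rest) + i_in)   # leaky integration of the input
    if v >= v_thresh:                        # threshold crossed: record a spike
        spike_times.append(step * dt)
        v = v_reset                          # reset the membrane potential

print(f"{len(spike_times)} spikes in {len(inputs) * dt:.0f} ms")
```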

Furthermore, supercomputers have been used to simulate the evolution of the universe. The Cosmology Machine, part of the Virgo Consortium and housed at the University of Durham, has been used to simulate the formation of galaxies and the distribution of dark matter. These simulations have helped scientists better understand the origins and structure of the universe.
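
At the heart of such simulations is gravity acting between every pair of particles. The toy direct-summation sketch below (in arbitrary units, with a made-up particle count, softening length, and timestep) shows that O(N²) core; production codes used by groups like the Virgo Consortium rely on tree and mesh methods to reach billions of particles.

```python
import numpy as np

# Toy direct-sum gravitational N-body step: every particle feels every other.
G, softening, dt = 1.0, 0.05, 0.01           # toy units, not physical constants

rng = np.random.default_rng(0)
pos = rng.standard_normal((200, 3))          # 200 particles in 3-D
vel = np.zeros_like(pos)
mass = np.ones(len(pos))

def accelerations(pos, mass):
    diff = pos[None, :, :] - pos[:, None, :]         # pairwise separation vectors
    dist2 = (diff ** 2).sum(-1) + softening ** 2     # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                    # no self-force
    return G * (diff * inv_d3[..., None] * mass[None, :, None]).sum(axis=1)

for _ in range(100):                                 # simple time integration
    vel += accelerations(pos, mass) * dt
    pos += vel * dt
```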

The Race for Exascale and Beyond

Image Credit: Мой Компьютер – CC BY 3.0/Wiki Commons

The race for exascale computing and beyond is heating up. El Capitan, built by HPE for deployment at Lawrence Livermore National Laboratory, targets more than 2 exaFLOPS for national security simulations using AMD Instinct GPUs. This supercomputer will be a significant leap forward in computational power.

China’s Sunway TaihuLight, operational since 2016 in Wuxi at 93 petaFLOPS, uses indigenous SW26010 processors for seismic modeling without U.S. chips. Meanwhile, the U.S. Exascale Computing Project, launched in 2016, has been funding the software and applications needed to put exascale systems such as Frontier, Aurora, and El Capitan to work on AI-driven scientific discovery. The race for exascale computing is not just about speed, but also about advancing scientific research and solving complex global challenges.

As we look to the future, quantum computing represents the next frontier in supercomputing. Quantum computers, such as IBM’s Q System One, use quantum bits, or qubits, which can exist in superpositions of 0 and 1 rather than a single definite value. These machines have the potential to solve certain problems that are currently beyond the reach of classical supercomputers.
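
To make the qubit idea concrete, the minimal sketch below classically simulates a single qubit’s statevector: a Hadamard gate puts |0⟩ into an equal superposition, and measurement statistics follow from the squared amplitudes. This is a classical toy for illustration only, not how systems like the Q System One are actually programmed.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                     # the |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket0                         # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2              # 50% / 50% measurement outcomes

rng = np.random.default_rng(42)
samples = rng.choice([0, 1], size=1000, p=probabilities)
print(probabilities, np.bincount(samples))      # ~[0.5 0.5] and roughly 500 each
```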

Meanwhile, neuromorphic computing, which seeks to mimic the human brain’s structure and function, represents another promising direction for supercomputing. Intel’s Loihi, a neuromorphic research chip, is one example of this new approach. As we continue to push the boundaries of computing, supercomputers will undoubtedly play a crucial role in shaping our future.