NASA’s AI space chip is surviving radiation, thermal extremes, and shock testing while performing roughly 500 times faster than the radiation-hardened processors currently in orbit

The computer inside NASA’s James Webb Space Telescope, the most expensive science instrument ever launched, runs on a processor roughly as powerful as a late-1990s iMac. That is not an oversight. It is the cost of building electronics that can survive the radiation, temperature swings, and vibration of space. For decades, every deep-space mission has accepted the same trade-off: reliability over speed. Now a new chip being tested at NASA facilities is showing signs that the trade-off may finally be easing.

The agency’s High Performance Spaceflight Computing (HPSC) processor, a radiation-hardened multicore system-on-chip developed by Microchip Technology under contract from NASA’s Jet Propulsion Laboratory, has been undergoing a punishing gauntlet of radiation exposure, thermal cycling, and mechanical shock tests. Early functional results from that campaign show the chip delivering roughly 500 times the computing performance of the radiation-hardened processors currently flying in orbit, according to a project update published by NASA. The original contract target was a 100-fold improvement, a goal the agency laid out when it awarded the development deal.

If the chip reaches flight qualification, it could reshape how spacecraft collect data, navigate hazards, and communicate with Earth for missions stretching into the 2030s and beyond.

Why space computers are stuck in the past

Outside Earth’s magnetic shield, charged particles from the sun and deep space constantly bombard electronics. A single high-energy ion can flip a memory bit, corrupt a calculation, or permanently short-circuit a transistor. To survive that environment, space-grade processors are built on older, larger fabrication nodes with extra shielding and error-correction circuitry baked into the silicon. The result is chips that are extraordinarily tough but painfully slow by modern standards.

The BAE Systems RAD750, the workhorse processor behind missions from the Mars Reconnaissance Orbiter to the Curiosity rover to the James Webb Space Telescope, delivers roughly 266 million instructions per second of single-core performance. A modern smartphone chip is thousands of times faster. That gap has widened dramatically over the past two decades as commercial silicon sprinted to smaller transistor nodes while space-grade designs stayed on older, more radiation-tolerant processes.

The practical consequences are real. A Mars rover that spots an unusual rock formation cannot decide on its own to stop and examine it. Instead, it photographs the terrain, compresses the images, transmits them to Earth, and waits for scientists to analyze the data and send back new driving instructions. With a one-way light delay of up to 24 minutes, that loop can consume most of a working day. Earth-observation satellites face a similar bottleneck: they downlink enormous volumes of raw imagery because they lack the onboard power to sort useful frames from cloud-covered ones before the next ground-station pass.
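
To make that bottleneck concrete, here is the kind of onboard triage an HPSC-class processor could enable, sketched in Python. The brightness heuristic, the 40 percent cloud cutoff, and the frame sizes are illustrative assumptions made for this article, not anything NASA has published about the chip or any mission.

```python
# Illustrative sketch of onboard frame triage before downlink.
# The cloud heuristic and thresholds are invented for illustration,
# not an actual NASA or HPSC algorithm.
import numpy as np

CLOUD_BRIGHTNESS = 0.85    # assumed normalized brightness above which a pixel reads as cloud
MAX_CLOUD_FRACTION = 0.40  # assumed cutoff: discard frames that are more than 40% cloud

def cloud_fraction(frame: np.ndarray) -> float:
    """Estimate how much of a normalized grayscale frame is covered by cloud."""
    return float(np.mean(frame > CLOUD_BRIGHTNESS))

def triage(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Keep only the frames clear enough to be worth downlinking."""
    return [f for f in frames if cloud_fraction(f) <= MAX_CLOUD_FRACTION]

# Example: 100 simulated 512x512 frames; only the clear ones enter the downlink queue.
frames = [np.random.rand(512, 512) for _ in range(100)]
downlink_queue = triage(frames)
print(f"{len(downlink_queue)} of {len(frames)} frames queued for downlink")
```

The filtering itself is trivial; the point is that today’s radiation-hardened processors lack the headroom to run even this kind of screening across a full imaging pass, let alone a trained cloud-detection model.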

What HPSC brings to the table

HPSC is designed to close some of that performance gap without abandoning the conservative engineering that keeps spacecraft alive. NASA’s Goddard Space Flight Center describes the chip as a multicore system-on-chip that integrates a time-sensitive networking (TSN) Ethernet switch and built-in fault tolerance. Multiple processing cores can share workloads, and hardware-level protections allow the chip to detect and recover from errors caused by particle strikes rather than simply crashing.
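
NASA has not published how those hardware protections work internally, but the underlying idea is familiar from fault-tolerant computing: run critical work redundantly and let a voter mask any single corrupted result. The Python sketch below illustrates that general principle only; it is not a description of HPSC’s actual circuitry.

```python
# Conceptual illustration of redundancy-and-voting fault tolerance.
# This shows the general principle, not HPSC's internal hardware design.
from collections import Counter

def vote(results):
    """Return the majority value among redundant results; fail loudly if there is no majority."""
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority -- recompute or fall back to safe mode")
    return winner

# Three redundant copies of the same calculation, one corrupted by a simulated bit flip.
print(vote((42, 42, 17)))  # -> 42; the corrupted copy is outvoted
```

In HPSC’s case those protections live in the silicon itself, which is why the chip can recover from a particle strike rather than simply crashing.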

That architecture opens doors that older single-core processors cannot. A rover equipped with HPSC-class computing could run machine-learning models to identify scientifically interesting targets, prioritize observations, and adjust its own driving path without waiting for ground commands. Multiple instruments on a single spacecraft could share a common computing pool instead of each carrying a dedicated controller, saving mass and simplifying wiring. Complex simulations, such as modeling a spacecraft’s thermal behavior during an unexpected attitude change, could run in flight rather than relying on precomputed lookup tables uploaded from Earth.

For human exploration, the implications are equally significant. Faster onboard processing could support real-time hazard detection during lunar or planetary landings, advanced life-support diagnostics, and crew-assist tools that today would overwhelm a RAD750-class computer.

NASA’s Game Changing Development program page lists the project’s milestones: a Critical Design Review was completed in 2024, clearing the way for final layout and fabrication, with tape-out (the step where the finished design is sent to the foundry) scheduled for mid-2025. As of June 2026, NASA has not publicly confirmed whether that tape-out milestone was met on schedule.

What the 500x number actually means

The 500-times figure is striking, but it comes with important context. NASA’s own update describes the result as “indications” from the current test campaign, not a final, independently benchmarked specification. No public documentation identifies the exact legacy processor used as the baseline for the comparison, though the RAD750 and the somewhat newer RAD5500 are the most common radiation-hardened chips in the current fleet. Without a named baseline, a published test methodology, and details about the specific workload, clock speed, and power envelope used, the number cannot be independently reproduced.

Exceeding a design target by five times during early functional testing is unusual. It could reflect genuine architectural headroom in the multicore design, or it could reflect a benchmark that plays to the new chip’s strengths. The most honest reading is that HPSC is on track to deliver a substantial generational leap, but the precise size of that leap under real mission workloads remains to be demonstrated.
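
Part of the ambiguity is simple arithmetic: a multicore chip’s overall speedup depends heavily on how parallel the benchmark workload is. The back-of-the-envelope sketch below uses entirely invented numbers, not measured HPSC or RAD750 figures, to show how the same hardware can look like a 60x chip or a 450x chip depending on what you ask it to do.

```python
# Toy Amdahl's-law illustration with invented numbers -- not measured HPSC or RAD750 data.
def speedup(single_core_gain: float, cores: int, parallel_fraction: float) -> float:
    """Overall speedup versus a legacy chip for a workload that is only partly parallel."""
    serial = 1.0 - parallel_fraction
    return single_core_gain / (serial + parallel_fraction / cores)

# Hypothetical new chip: each core 60x faster than the legacy processor, 8 cores available.
for p in (0.0, 0.9, 0.99):
    print(f"parallel fraction {p:.2f}: ~{speedup(60, 8, p):.0f}x")
# parallel fraction 0.00: ~60x
# parallel fraction 0.90: ~282x
# parallel fraction 0.99: ~449x
```

Until NASA names the baseline processor and the workload behind the 500x figure, there is no way to know where on that curve the claim sits.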

Radiation test specifics are also still sparse. NASA has confirmed the chip is undergoing total-dose and single-event-effect testing, standard procedures that expose silicon to proton and heavy-ion beams at particle accelerator facilities. But quantitative results, such as the total ionizing dose the chip can withstand or its vulnerability to single-event latch-up, have not been made public. Those numbers will determine whether HPSC can fly into Jupiter’s intense radiation belts or is better suited for moderate environments like Mars orbit and the lunar surface.

Open questions before first flight

Power and heat are among the biggest unknowns. A multicore, high-throughput processor inevitably generates more waste heat than the simpler chips it replaces, and spacecraft have limited ways to shed that heat. Solar arrays and radioisotope thermoelectric generators impose strict power budgets. NASA has not yet published figures on HPSC’s watts-per-computation or how performance scales when cores are throttled to fit within a constrained power envelope. Mission designers will need those numbers to decide whether to run the chip at full capability or reserve its power for bursty, time-critical tasks like landing sequences or science-data triage.
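
To see why those numbers matter, consider the back-of-the-envelope budgeting a mission designer would do once they exist. Every figure in the sketch below, the idle draw, the per-core wattage, and the core count, is hypothetical; NASA has not released HPSC power specifications.

```python
# Hypothetical power-budgeting sketch. None of these wattages or core counts are
# published HPSC figures; they only illustrate the trade-off mission designers face.
def cores_within_budget(power_budget_w: float, idle_w: float,
                        watts_per_core: float, max_cores: int) -> int:
    """Largest number of active cores that fits the power budget, capped at the core count."""
    usable = max(0.0, power_budget_w - idle_w)
    return min(max_cores, int(usable // watts_per_core))

# Assumed figures: 2 W idle, 1.5 W per active core, 8 cores on the die.
for budget in (5.0, 10.0, 20.0):
    n = cores_within_budget(budget, idle_w=2.0, watts_per_core=1.5, max_cores=8)
    print(f"{budget:>4.1f} W budget -> {n} active cores")
```

The interesting cases are the middle ones, where a spacecraft could afford full performance only in short bursts, exactly the landing-sequence and data-triage scenarios described above.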

No specific mission has publicly committed to flying HPSC. Neither NASA’s science directorates nor its human exploration programs have announced integration timelines tying the chip to a named spacecraft. Candidates that could benefit are easy to imagine: the Dragonfly rotorcraft headed to Saturn’s moon Titan, future Artemis lunar surface systems, or a next-generation Mars orbiter. But commitment and imagination are different things, and radiation-hardened chip programs have a history of long timelines between tape-out and first flight. The RAD750, for instance, took roughly a decade from early development to its first operational mission. Any delay in HPSC’s fabrication or qualification could push its debut into the early 2030s.

There is also the competitive landscape to consider. SpaceX has taken a different approach to the radiation problem, flying commercial-grade processors aboard Starlink satellites and relying on software-level redundancy and rapid replacement rather than radiation-hardened silicon. European agencies have invested in the LEON/GR740 processor family, and several startups are exploring RISC-V-based radiation-tolerant designs. HPSC does not need to be the only solution to be valuable, but its adoption timeline will depend partly on whether alternative approaches mature faster.

What this means for the next decade of space missions

The gap between what a spacecraft can think and what it can see has been widening for years. Cameras, spectrometers, and radar systems have grown far more capable, but the onboard computers processing their data have barely budged. HPSC represents the most serious NASA-backed effort in a generation to close that gap.

All load-bearing claims about the chip’s performance, architecture, and test status trace back to primary NASA sources: the JPL project update, the Game Changing Development program page, the Goddard engineering directorate’s technical description, and the original contract award. These are credible institutional documents, but they are also first-party accounts from the organization building the chip. Independent validation from academic groups, defense laboratories, or commercial partners has not yet appeared in the public record.

For now, the evidence points to a credible, well-advanced program that has exceeded its own early benchmarks. The hard part is still ahead: proving the chip can survive the harshest radiation environments in the solar system, fitting its power and thermal demands into real spacecraft designs, and convincing mission planners to trust a new processor with billions of dollars of hardware and years of scientific ambition. If HPSC clears those hurdles, the era of deep-space computers that think like it is still 1995 may finally be coming to an end.

*This article was researched with the help of AI, with human editors creating the final content.