Every spacecraft NASA has sent beyond low-Earth orbit over the past two decades has relied on some version of the same brain: the RAD750, a radiation-hardened processor built by BAE Systems that runs at roughly 200 MHz. For comparison, the phone in your pocket is hundreds of times faster. The RAD750 has been remarkably reliable, powering everything from the Mars Reconnaissance Orbiter to the Perseverance rover, but its limitations force mission controllers into a painstaking routine: upload a short sequence of commands, wait for the spacecraft to execute them, download the results, then repeat. On Mars, where a one-way radio signal takes roughly 3 to 22 minutes depending on orbital geometry, that loop can stretch a single driving decision into a full day of back-and-forth.
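That delay is simple geometry: divide the Earth-Mars distance by the speed of light. A quick back-of-the-envelope check in Python, using rounded closest-approach and maximum-separation distances (round figures for illustration, not values from NASA's program documents):

```python
# One-way light time between Earth and Mars at orbital extremes.
# Distances are rounded illustrative figures, not mission data.
SPEED_OF_LIGHT_KM_S = 299_792.458

for label, distance_km in [("closest approach", 54.6e6), ("maximum separation", 401e6)]:
    minutes = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{minutes:.0f} min one way")
# closest approach: ~3 min one way
# maximum separation: ~22 min one way
```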
NASA is working to break that cycle. Under a $50 million firm-fixed-price contract awarded to Microchip Technology, the agency is developing the High-Performance Spaceflight Computing processor, or HPSC, a palm-sized system-on-chip designed to deliver at least 100 times the computing power of current flight-qualified hardware. As of spring 2026, engineers at NASA’s Goddard Space Flight Center are running integration tests on the chip, and the agency describes it as the foundational computer for a new generation of missions to the Moon, Mars, and beyond.
Why current spacecraft computers hold missions back
The RAD750 was a breakthrough when it first flew in 2005, but two decades later its architecture creates real operational bottlenecks. During Perseverance’s landing in February 2021, the rover’s Lander Vision System, developed at NASA’s Jet Propulsion Laboratory, compared camera images against orbital maps and steered the descent stage toward a safe touchdown point in Jezero Crater. That terrain-relative navigation worked, but it ran only during the seven minutes of descent. Once on the surface, Perseverance reverted to the slow command-and-wait cadence, typically driving just tens of meters per sol while operators on Earth reviewed hazard imagery overnight.
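The image-to-map comparison at the heart of terrain-relative navigation can be illustrated with a standard computer-vision primitive. The sketch below is not JPL's Lander Vision System code; it is a minimal, hypothetical illustration of locating a descent-camera patch within an orbital map using brute-force normalized cross-correlation:

```python
import numpy as np

def locate_patch(orbital_map: np.ndarray, descent_patch: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) offset in the orbital map where the descent-camera
    patch correlates most strongly. Brute force for clarity; a real system
    adds image rectification, scale handling, and outlier rejection."""
    ph, pw = descent_patch.shape
    patch = descent_patch - descent_patch.mean()
    patch_norm = np.linalg.norm(patch)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(orbital_map.shape[0] - ph + 1):
        for c in range(orbital_map.shape[1] - pw + 1):
            window = orbital_map[r:r + ph, c:c + pw]
            centered = window - window.mean()
            denom = np.linalg.norm(centered) * patch_norm
            if denom == 0:
                continue  # flat, featureless window: no usable signal
            score = float((centered * patch).sum()) / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Toy check: cut a patch out of synthetic terrain and recover its position.
rng = np.random.default_rng(0)
terrain = rng.random((64, 64))
print(locate_patch(terrain, terrain[20:36, 30:46]))  # -> (20, 30)
```

The computational cost is the point: every candidate offset demands fresh arithmetic over the whole patch, which is why this kind of matching has so far been reserved for a few critical minutes of descent rather than run continuously.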
JPL has pointed to another inefficiency: current onboard computers are sized for peak-demand moments like atmospheric entry, yet those intense phases represent a tiny fraction of a mission’s total lifespan. For the months or years of cruise and surface operations in between, the processor sits largely underutilized, still drawing power but doing relatively little.
What HPSC is designed to change
HPSC attacks both problems. Its architecture includes dedicated AI dataflow processing and scalable vector computing, capabilities that no current flight-qualified processor offers. In practical terms, that means a rover equipped with HPSC could run continuous terrain-relative navigation and hazard avoidance while driving, not just during a brief landing window. Instead of stopping every few meters to photograph the ground and wait for Earth-based operators to approve the next move, the rover could interpret its surroundings and pick a safe path on its own, potentially covering far more ground each Martian day.
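Here is what that onboard loop might look like in the simplest possible terms: score a handful of candidate headings against a locally built hazard map and commit to the safest one, or stop if nothing clears the tolerance. The function below is a toy sketch; every name and threshold in it is invented for illustration, not drawn from any flight software:

```python
import numpy as np

HAZARD_TOLERANCE = 0.6  # invented threshold; a real limit would come from rover engineering margins

def pick_heading(hazard_map: np.ndarray, candidate_headings_deg: list[float]) -> float | None:
    """Return the candidate heading whose sampled drive path has the lowest
    peak hazard score, or None if every path exceeds the tolerance
    (i.e., stop and wait rather than risk the drive)."""
    rows, cols = hazard_map.shape
    start_r, start_c = rows - 1, cols // 2  # rover at the bottom-center of its local map
    best_heading, best_peak = None, np.inf
    for heading in candidate_headings_deg:
        theta = np.radians(heading)
        steps = np.arange(1, 15)  # sample a short arc ahead of the rover
        rr = (start_r - steps * np.cos(theta)).astype(int)
        cc = (start_c + steps * np.sin(theta)).astype(int)
        inside = (rr >= 0) & (rr < rows) & (cc >= 0) & (cc < cols)
        if not inside.any():
            continue
        peak = hazard_map[rr[inside], cc[inside]].max()
        if peak < min(best_peak, HAZARD_TOLERANCE):
            best_heading, best_peak = heading, peak
    return best_heading
```

The point of the sketch is the control flow: the decision happens onboard, in milliseconds, instead of across a multi-minute radio link.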
The chip also supports power scaling. During quiet cruise phases, non-essential processing functions can be turned off, dropping energy consumption. When the mission demands peak performance, such as during orbital insertion or a landing sequence, the processor ramps back up. NASA’s Goddard Space Flight Center has been integrating HPSC with autonomy software frameworks that handle navigation, guidance, control, and onboard science data analysis.
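In software terms, that scaling amounts to a policy table mapping mission phases to compute profiles. The sketch below is purely hypothetical; the core counts and wattages are invented placeholders, since NASA has not published HPSC power figures:

```python
from dataclasses import dataclass
from enum import Enum

class MissionPhase(Enum):
    CRUISE = "cruise"
    SURFACE_SCIENCE = "surface_science"
    ENTRY_DESCENT_LANDING = "entry_descent_landing"

@dataclass(frozen=True)
class ComputeProfile:
    active_cores: int
    ai_accelerator_on: bool
    est_power_w: float  # invented placeholder, not a published HPSC number

# Hypothetical policy: wake up only as much silicon as the phase requires.
PROFILES = {
    MissionPhase.CRUISE: ComputeProfile(1, False, 7.0),
    MissionPhase.SURFACE_SCIENCE: ComputeProfile(4, True, 18.0),
    MissionPhase.ENTRY_DESCENT_LANDING: ComputeProfile(8, True, 30.0),
}

def profile_for(phase: MissionPhase) -> ComputeProfile:
    return PROFILES[phase]

print(profile_for(MissionPhase.CRUISE))
# ComputeProfile(active_cores=1, ai_accelerator_on=False, est_power_w=7.0)
```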
Two hardware variants are planned. A radiation-hardened version is intended for deep-space and long-duration lunar and Mars missions, where exposure to galactic cosmic rays and solar particle events can corrupt or destroy conventional electronics. A radiation-tolerant version targets satellites and stations in low-Earth orbit, where the planet’s magnetosphere provides partial shielding. The program sits within NASA’s Game Changing Development portfolio, a funding line reserved for technologies the agency considers too risky for individual mission budgets but essential to its long-term roadmap.
What has not been proven yet
The 100x performance target is a contractual requirement that Microchip must meet, not a demonstrated result. No peer-reviewed benchmark data or independent radiation-test results have been published as of mid-2026. Detailed metrics that mission planners need, such as total ionizing dose thresholds, single-event upset rates, and power consumption across different AI workloads, have not appeared in publicly available documents.
NASA has also not announced which mission will fly HPSC first. Candidates that could benefit from the processor’s capabilities include the Dragonfly rotorcraft mission to Saturn’s moon Titan, future lunar surface missions under Artemis, and any successor concepts to Mars Sample Return, but no official selection has been made public. Without a specific flight assignment, the timeline from laboratory prototype to operational hardware remains open-ended.
There are also unresolved questions about how HPSC's onboard autonomy will interact with deep-space communication links. NASA is separately developing delay/disruption-tolerant networking (DTN) protocols to improve data relay beyond Earth orbit, but published documents have not yet described how the two systems will work together. A spacecraft smart enough to act on its own will still need clear rules about when to wait for human confirmation and when to proceed independently, especially during fault scenarios where incomplete information could lead to irreversible decisions.
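One way to picture those rules is as an explicit gate in the fault-response logic. The function below is a hypothetical illustration, assuming invented confidence and reversibility inputs; nothing resembling it has been published for HPSC:

```python
def proceed_without_earth(confidence: float, reversible: bool, link_delay_min: float,
                          time_to_harm_min: float) -> bool:
    """Hypothetical autonomy gate: act independently only when the onboard
    estimate is confident and the action is reversible, or when waiting a
    full round-trip to Earth would itself cause irreversible harm.
    All thresholds are invented for illustration."""
    CONFIDENCE_FLOOR = 0.95  # assumed value, not a NASA requirement

    if reversible and confidence >= CONFIDENCE_FLOOR:
        return True  # well-understood, undoable action: proceed
    if 2 * link_delay_min > time_to_harm_min:
        return True  # Earth cannot answer in time: act on the best onboard estimate
    return False     # irreversible and ambiguous, with time to spare: wait for humans

# An irreversible maneuver at 80% confidence, 20-minute one-way link, an hour of margin:
print(proceed_without_earth(0.80, reversible=False, link_delay_min=20, time_to_harm_min=60))  # False
```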
How HPSC fits into a crowded push for smarter spacecraft
NASA is not the only agency chasing more capable onboard computing. The European Space Agency has been developing its own next-generation processors under programs such as the DAHLIA initiative, which targets high-performance, radiation-tolerant chips for future ESA science and exploration missions. In the commercial sector, companies building large satellite constellations are already flying GPU-class processors in low-Earth orbit, taking advantage of the partial radiation shielding that Earth’s magnetosphere provides. Those commercial efforts, however, are not designed to survive the deep-space radiation environment that HPSC must endure for years at a time.
The distinction matters because it frames HPSC’s real competitive challenge: delivering AI-grade processing power inside a radiation-hardened package that can operate far from Earth with minimal power. Commercial processors can be replaced cheaply when a satellite is decommissioned after a few years; a Mars rover’s computer must work flawlessly for the duration of a mission that could last a decade or more.
What engineers and analysts have said
Public statements from people directly involved in the program remain limited. NASA officials have described HPSC as “the brain of the spacecraft” in agency communications, framing it not as an incremental upgrade but as the central computing platform for a generation of missions. Beyond that institutional language, no detailed on-the-record interviews with Microchip engineers, Goddard integration leads, or independent aerospace analysts have appeared in publicly available sources as of June 2026. That absence is itself notable: for a program of this scale and ambition, the lack of published expert commentary means outside observers are still relying almost entirely on NASA’s own program descriptions to assess progress.
If the processor delivers on its specifications, the relationship between ground controllers and their spacecraft changes fundamentally. Landers could adjust their touchdown points in real time when they spot unexpected boulders. Orbiters could autonomously retarget instruments toward transient events, such as a dust storm forming on Mars or a plume erupting from an icy moon, without waiting hours for instructions. Deep-space probes could diagnose hardware faults and route around them before operators even know something went wrong.
Milestones that will separate ambition from proven hardware
The milestones worth watching in the months ahead are concrete and specific: completion of radiation testing at a qualified facility, hardware-in-the-loop simulations running flight-representative software, and eventually a formal announcement that a funded mission has baselined HPSC as its primary computer. Each step would move the chip from ambitious specification closer to proven capability. Until then, HPSC stands as NASA's clearest bet that the next generation of explorers, whether rolling across Mars or diving past the outer planets, will be able to think for themselves when Earth is too far away to help.
*This article was researched with the help of AI, with human editors creating the final content.*