Morning Overview

NASA’s HPSC processor clocked 500 times the speed of current spaceflight computers during radiation and shock testing at JPL

Inside a shielded test chamber at the Jet Propulsion Laboratory in Pasadena, California, a processor no larger than a drink coaster has been absorbing simulated cosmic radiation, violent mechanical shocks, and temperature extremes since February 2026. The chip is NASA’s High Performance Spaceflight Computing processor, or HPSC, and during those trials it recorded processing speeds roughly 500 times faster than the radiation-hardened computers currently flying on American spacecraft. If the results survive full qualification, the HPSC could mark the single largest leap in onboard computing power since the space agency began sending machines beyond Earth orbit.

A processor built to replace decades-old silicon

The computer at the heart of most active NASA missions is the RAD750, a radiation-hardened chip based on a PowerPC architecture designed in the early 2000s. It powers the Curiosity and Perseverance Mars rovers, the Lunar Reconnaissance Orbiter, and a roster of deep-space probes. By terrestrial standards, the RAD750 is glacially slow: it runs at roughly 200 MHz with processing throughput that a modern smartphone surpasses by orders of magnitude. But it was built to survive, not to sprint. Its hardened transistors can shrug off the charged-particle bombardment that would corrupt or destroy a commercial chip within hours.

That tradeoff between durability and speed has defined spaceflight computing for decades. HPSC is NASA’s attempt to collapse the gap. The program launched in 2021 under the Space Technology Mission Directorate with a $50 million firm-fixed-price contract awarded specifically to develop the processor, not the broader HPSC ecosystem. That contract scope covered a chip capable of Ethernet networking, artificial intelligence and machine-learning workloads, and flexible power management, all capabilities absent from current flight-qualified hardware.

The chip cleared its critical design review in 2024. Tape-out, the step where a finalized design is sent to a fabrication facility, followed in mid-2025. The first physical processors were manufactured later that year, according to NASA’s program overview. In February 2026, the team confirmed the hardware was alive and functional by sending its first successful “Hello Universe” message from the new silicon, a traditional first-boot milestone in processor development.

Where the 500x number comes from

The performance claim originates from a February 2026 update published by NASA during the JPL test campaign. The HPSC project manager at JPL described the chip as operating at 500 times the performance of radiation-hardened processors currently in use, a comparison drawn against the class of hardware flying on active missions.

That figure is notably higher than the benchmark NASA cited when the program began. The original contract announcement described capabilities “up to 100 times faster” than state-of-the-art space computers. NASA has not publicly reconciled the two numbers. The most likely explanations are that the 100x target was a conservative contractual floor that the final silicon exceeded, or that the two figures measure different workload types. Both scenarios are common in semiconductor development, but without a published benchmark methodology, the 500x result is best treated as a preliminary test measurement rather than a final, independently verified specification.

As of spring 2026, the chip is being evaluated across four categories: power draw, raw performance, reliability, and radiation tolerance. Those represent the standard qualification gauntlet for any space-grade processor. NASA has not yet released specific radiation dosage levels the chip has survived, detailed power-consumption figures under sustained AI workloads, or the thermal and vibration profiles used in testing. The full qualification data, including total ionizing dose tolerance, single-event upset rates, and latchup immunity, will ultimately determine whether HPSC can handle environments as harsh as Jupiter’s radiation belts or the lunar surface’s 300-degree temperature swings.

What faster computing would change in space

The practical stakes are enormous. Current spaceflight computers are so limited that the bulk of science data collected by probes and rovers must be transmitted to Earth for meaningful analysis. Some onboard processing already exists: the Perseverance rover, for example, uses a system called AEGIS to autonomously select rock targets for its instruments. But those capabilities are tightly constrained by the RAD750’s throughput. A radio signal takes roughly 4 to 24 minutes to travel one way between Earth and Mars depending on orbital positions, so a round trip for data and commands can approach an hour, and delays to the outer solar system stretch into hours.
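The range of those delays follows directly from orbital geometry. As a rough illustration (the distance figures below are approximate Earth–Mars orbital extremes, not mission data), the one-way light time can be computed from distance and the speed of light:

```python
# Illustrative one-way light-time calculation for Earth-Mars distances.
# Distance values are approximate orbital extremes, not mission telemetry.
C_KM_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way signal travel time in minutes."""
    return distance_km / C_KM_S / 60

MARS_CLOSEST_KM = 54.6e6   # approximate closest approach
MARS_FARTHEST_KM = 401e6   # approximate maximum separation

print(f"closest:  {one_way_delay_minutes(MARS_CLOSEST_KM):.1f} min")   # ~3 min
print(f"farthest: {one_way_delay_minutes(MARS_FARTHEST_KM):.1f} min")  # ~22 min
```

Doubling those figures for a command-and-response round trip shows why spacecraft cannot wait on ground control for time-critical decisions.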

A processor 500 times faster could shift that equation. Spacecraft might analyze data onboard, compress or prioritize transmissions, and discard low-value measurements before they ever reach the Deep Space Network. That would effectively multiply the science return of every bit sent home, a critical advantage as missions generate increasingly large datasets from high-resolution cameras, spectrometers, and radar instruments.
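The prioritization idea can be sketched in a few lines. This is a hypothetical illustration, not NASA flight software: the observation names, scores, and greedy value-per-megabyte strategy are all assumptions chosen to show the concept of fitting the highest-value data into a fixed downlink budget.

```python
# Hypothetical sketch of onboard downlink prioritization: score each
# observation, then keep the highest-value items that fit within a
# bandwidth budget. All names and numbers here are illustrative.
from dataclasses import dataclass

@dataclass
class Observation:
    name: str
    size_mb: float
    science_score: float  # higher = more valuable, assigned onboard

def select_for_downlink(obs: list[Observation], budget_mb: float) -> list[Observation]:
    """Greedily pick observations by value density until the budget is spent."""
    ranked = sorted(obs, key=lambda o: o.science_score / o.size_mb, reverse=True)
    chosen, used = [], 0.0
    for o in ranked:
        if used + o.size_mb <= budget_mb:
            chosen.append(o)
            used += o.size_mb
    return chosen

queue = [
    Observation("panorama", 120.0, 0.40),
    Observation("spectra", 5.0, 0.90),
    Observation("anomaly_frame", 20.0, 0.95),
]
# With a 30 MB budget, the dense, high-value items win out over the panorama.
picked = select_for_downlink(queue, budget_mb=30.0)
print([o.name for o in picked])  # ['spectra', 'anomaly_frame']
```

Even a toy scheme like this requires scoring data onboard, which is exactly the kind of workload current radiation-hardened processors are too slow to run at scale.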

Autonomy stands to benefit just as directly. Faster radiation-hardened computing would let landers and rovers run more sophisticated hazard-avoidance algorithms, adaptive sampling strategies, and fault-management routines without waiting for ground control. Spacecraft in distant orbits could adjust trajectories based on local conditions in near real time rather than following preplanned sequences built around worst-case assumptions. For crewed missions beyond low Earth orbit, high-performance processors could support advanced life-support monitoring, robotic assistants, and real-time medical decision tools during the long communication blackouts that come with deep-space travel.

Machine learning, currently limited to experimental payloads on a handful of missions, could become a core function. An HPSC-class processor might run onboard classifiers to detect transient atmospheric events on other planets, identify geological features worth closer inspection, or flag spacecraft health anomalies before they escalate into failures.

What still has to happen before HPSC flies

Every one of those scenarios depends on more than raw speed. Predictable performance under sustained radiation and thermal stress is what separates a promising chip from a flight-qualified one, and that is precisely what the JPL campaign is designed to prove. NASA has not announced which mission will be the first to carry HPSC hardware, nor has it published an integration timeline. No mission manifest naming HPSC as baselined hardware has appeared in public program documents as of June 2026.

The missing details about power budgets, environmental limits, and benchmark conditions are not academic footnotes. They will determine how aggressively mission designers can rely on the processor, how many redundant copies a spacecraft must carry, and how much of a vehicle’s limited mass and power budget must be reserved for computing. Until NASA releases full qualification results, the safest reading is that HPSC is a potentially transformative technology still working through the most demanding phase of its path to space, one that has hit every publicly stated milestone on schedule since the program began five years ago.


*This article was researched with the help of AI, with human editors creating the final content.