Image Credit: cpradi - CC BY 2.0/Wiki Commons

The first artificial intelligence model trained entirely in orbit has turned a test satellite into a tiny, solar-powered data center, and it runs on a single Nvidia chip. By pushing a full training run off the planet and into low Earth orbit, the startup behind the mission is testing whether the future of large-scale computing might live above the atmosphere instead of in sprawling server farms on the ground.

What sounds like a science fiction stunt is in fact a tightly engineered proof of concept: a Washington-based company used an Nvidia H100 graphics processing unit on a dedicated satellite to train a working language model in space, then let it talk back to Earth. The experiment hints at a new class of orbital infrastructure that could reshape how we think about everything from climate monitoring to secure communications.

The startup that put a data center in orbit

The company at the center of this experiment is Starcloud, a young firm that has staked its identity on turning satellites into high-performance computers rather than just cameras or relays. Based in Washington, Starcloud has positioned itself as a bridge between the booming launch industry and the equally explosive demand for AI compute, arguing that the vacuum of space can double as a cooling system for dense chips. Reporting on the mission describes Starcloud as Nvidia-backed, a detail that signals how closely the startup is aligned with the chipmaker that already dominates AI training on Earth.

Starcloud’s leadership has framed the mission as a first step toward orbital data centers that can host advanced models without the land, water, and power constraints that dog terrestrial facilities. In interviews, the company’s CEO, Philip Johnston, has described a roadmap that starts with a single experimental satellite and scales to constellations that could host large language models and other workloads in low Earth orbit. The company’s Washington roots and its decision to fly a flagship Nvidia H100 GPU into space were both underscored in reporting that detailed how Washington-based Starcloud launched the mission and pitched it as a preview of orbital data centers.

How an Nvidia H100 ended up in low Earth orbit

To turn a satellite into a training rig, Starcloud had to do something that, until now, belonged more to PowerPoint decks than to flight manifests: strap a top-tier data center GPU to a spacecraft bus and light the rocket. The company flew an Nvidia H100 graphics processing unit on a test satellite earlier in the year, treating the chip not as a payload controller but as the main event. That hardware choice matters, because the H100 is the same class of GPU that powers flagship AI clusters on the ground, and coverage of the mission notes that Starcloud flew the Nvidia H100 specifically to test whether a full training run was possible in orbit.

Getting that chip to survive launch and operate in the harsh radiation environment of low Earth orbit required more than just bolting a server board into a satellite frame. Starcloud’s engineers had to design power, cooling, and shielding systems that could keep the H100 within its operating envelope while the spacecraft relied on solar panels and batteries instead of a data center’s redundant feeds. One account of the mission describes the satellite as overcoming the usual constraints of space hardware by centering the design on the Nvidia H100 graphics processing unit, a detail that underlines how the chip was treated as the payload rather than a peripheral.

Training a language model in space, not just running one

What sets Starcloud’s mission apart from earlier “AI in space” demos is that the team did not simply upload a pre-trained model and run inference. Instead, the company trained an artificial intelligence model from scratch on the orbiting GPU, turning the satellite into a self-contained learning system. Reporting on the project is explicit that the Nvidia-backed startup became the first company to train an advanced AI model in space, rather than just hosting one, and that distinction is central to coverage that describes how the Nvidia-backed startup crossed that threshold.

The model itself was a compact language system, small enough to be trained within the power and bandwidth limits of a single H100 but sophisticated enough to behave like a chatbot. Starcloud’s team used a NanoGPT implementation and fed it a complete literary corpus, turning the satellite into a kind of orbital writing workshop. The company’s chief engineer, Adi, has been cited explaining that the NanoGPT implementation was trained on the complete works of Shakespeare, a detail that appears in reporting on an AI model trained in space and that neatly captures the mix of technical ambition and theatrical flair behind the experiment.
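To make the scale of that experiment concrete, here is a deliberately tiny sketch of character-level language modeling in the spirit of NanoGPT. This is not Starcloud's code: it substitutes a bigram count model for a real transformer, and the toy corpus stands in for the complete works of Shakespeare, purely to show the train-then-sample loop.

```python
from collections import defaultdict
import random

def train_bigram_lm(text):
    """Count character-bigram frequencies: a toy stand-in for a
    NanoGPT-style training run (illustrative only, not a transformer)."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, seed_char, length=40, rng=None):
    """Sample characters proportionally to the learned bigram counts."""
    rng = rng or random.Random(0)  # seeded for repeatability
    out = [seed_char]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:  # character never seen as a prefix; stop early
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

# Toy corpus standing in for Shakespeare's complete works
corpus = "to be or not to be that is the question "
model = train_bigram_lm(corpus)
print(generate(model, "t", length=20))
```

The real mission's constraint is the same one this sketch dramatizes: the entire corpus and model must fit on the spacecraft, which is why a bounded dataset and a compact architecture were sensible choices.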

“Greetings, earthlings” and the first orbital chatbot

Once the training run was complete, Starcloud did what any modern AI team would do with a fresh model: turn it into a chatbot and start asking questions. The company has said that users on the ground could send prompts to the satellite and receive responses generated entirely in orbit, a loop that transformed the spacecraft into a talking node in low Earth orbit. One report quotes the system greeting users with the phrase “Greetings, earthlings,” a line that has already become shorthand for the mission and that appears in coverage of how “Greetings, earthlings” became the playful proof that the model was live.

Behind the theatrics, the chatbot served a serious purpose: it demonstrated that a full training and inference pipeline could be closed in orbit, with the satellite handling data ingestion, model updates, and response generation without relying on a ground-based GPU cluster. Starcloud has also highlighted that the same H100 GPU used for the Shakespeare-trained NanoGPT can run other models, including Google Gemma, on the same platform. A discussion among early observers notes that the H100 GPU was confirmed running Google Gemma in orbit on a solar-powered satellite, a detail that extends the mission beyond a single demo model.
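The closed pipeline described above can be caricatured in a few lines. Everything here is hypothetical scaffolding, not Starcloud's architecture: the class name, the word-counting "training", and the canned greeting are all stand-ins for the real ingest-train-respond loop running on the satellite's GPU.

```python
# Hypothetical sketch of a closed train-then-serve loop: the node ingests
# data, updates its model, and answers prompts without touching the ground.
class OrbitalNode:
    def __init__(self):
        self.model = {}  # stands in for GPU-resident model weights

    def ingest_and_train(self, corpus):
        # "Training" here is just word counting, a placeholder for a
        # real on-board GPU training run.
        for word in corpus.split():
            self.model[word] = self.model.get(word, 0) + 1

    def respond(self, prompt):
        # Toy "generation": echo the most frequent trained word.
        if not self.model:
            return "model not trained"
        best = max(self.model, key=self.model.get)
        return f"Greetings, earthlings: {best}"

node = OrbitalNode()
node.ingest_and_train("to be or not to be")
print(node.respond("hello"))  # → Greetings, earthlings: to
```

The point of the caricature is the control flow, not the model: every step from data to answer happens inside one object, just as the mission kept every step inside one spacecraft.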

Why Nvidia is betting on orbital compute

Nvidia’s fingerprints are all over this mission, from the H100 hardware to the company’s decision to publicly align itself with Starcloud’s ambitions. The chipmaker has already become synonymous with AI training on Earth, and its support for a startup that wants to move that training into orbit suggests a belief that demand for compute will eventually outgrow terrestrial constraints. Coverage of the mission repeatedly describes Starcloud as Nvidia-backed and notes that the startup is associated with Nvidia’s NVDA stock symbol, a connection that is spelled out in reporting on how an Nvidia-backed (NVDA) startup is positioning itself to host large-scale data centers in orbit.

For Nvidia, orbital compute is not just a branding exercise; it is a way to keep selling high-end GPUs into new markets as terrestrial data centers run into political and physical limits. The same H100 that powers hyperscale clusters for language models like GPT-4 can, in principle, power constellations of satellites that train and serve models for Earth observation, telecommunications, or defense. One account of the mission notes that the Starcloud satellite is part of a broader race to build orbital data centers, a race that Nvidia has now joined through its backing of orbital computing experiments that treat space as the next frontier for its GPUs.

From Shakespeare to Gemma: what the model actually did

It is tempting to treat the Shakespeare-trained NanoGPT as a gimmick, but the choice of corpus and architecture was a calculated way to stress test the satellite’s capabilities without overwhelming it. Training on the complete works of Shakespeare gave the model a rich, bounded dataset that could be fully stored and processed on the spacecraft, while the NanoGPT implementation kept the parameter count within the limits of a single H100. Reporting on the mission notes that the NanoGPT implementation was trained on Shakespeare’s full body of work, a detail that appears in coverage of Shakespeare as the chosen training data.

Once that training run proved the concept, Starcloud used the same H100 GPU to run other models, including Google Gemma, which is designed to be efficient enough for smaller hardware while still delivering strong language capabilities. Observers have pointed out that the H100 GPU on the satellite was confirmed running Google Gemma in orbit, a fact that underscores the platform’s flexibility and is highlighted in a discussion of how the Google Gemma model shared the same solar-powered hardware as the Shakespeare chatbot.

Why train in space at all?

Training a model in orbit is harder and more expensive than spinning up a cloud instance, so the obvious question is why anyone would bother. Starcloud’s answer is that space offers unique advantages for certain workloads, especially those that rely on data generated in orbit. By training models directly on satellites, operators can avoid downlink bottlenecks, process sensitive information without ever transmitting raw data to the ground, and take advantage of the cold vacuum for more efficient thermal management. Coverage of the mission frames it as part of a broader push toward orbital data centers, with one report describing how the race to build such infrastructure is heating up as orbital data center concepts move from pitch decks to hardware.
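The downlink argument is easy to quantify with a back-of-envelope calculation. The numbers below are assumptions chosen for illustration, not figures from the mission: they simply show how shipping model outputs instead of raw imagery collapses the daily downlink budget.

```python
# Back-of-envelope sketch (all numbers assumed, not from the article) of
# why on-board processing eases downlink bottlenecks.
raw_image_mb = 500        # assumed size of one raw multispectral scene
images_per_day = 1_000    # assumed daily capture rate
summary_kb = 50           # assumed size of an on-board model's output

# Daily downlink volume if every raw scene is transmitted to the ground
raw_downlink_gb = raw_image_mb * images_per_day / 1_000

# Daily downlink volume if only model outputs leave the satellite
processed_downlink_gb = summary_kb * images_per_day / 1_000_000

print(f"raw: {raw_downlink_gb:.1f} GB/day")          # raw: 500.0 GB/day
print(f"processed: {processed_downlink_gb:.3f} GB/day")  # processed: 0.050 GB/day
```

Under these assumed figures the processed path moves four orders of magnitude less data, which is the core of the bandwidth case for training and running models where the data is born.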

There is also a strategic angle. Governments and companies that depend on satellite imagery, secure communications, or resilient infrastructure are increasingly interested in systems that can operate even if ground networks are disrupted. An orbital cluster of GPUs that can train and run models autonomously would be a powerful asset in that context, and Nvidia’s involvement suggests that the hardware ecosystem is ready to support it. Reporting on the mission notes that the Nvidia-backed startup is explicitly targeting the ability to host large-scale data centers in orbit, a goal that is spelled out in coverage of how large-scale data centers are part of Starcloud’s long term plan.

The orbital data center race and its skeptics

Starcloud is not alone in imagining servers in space, but it is the first to show a full AI training loop running on a satellite with a flagship GPU. That milestone has turned the company into a reference point in what some analysts are already calling an orbital data center race, a competition that pits startups and incumbents against each other to see who can make space-based compute economically viable. One report on the mission explicitly frames Starcloud’s achievement as a sign that the orbital data center race is heating up, with the “Greetings, earthlings” chatbot as a symbol of this new class of infrastructure.

Not everyone is convinced that the economics will work. Launch costs, radiation hardening, and the difficulty of servicing hardware in orbit all weigh against the idea of treating satellites like disposable servers. Some early commentary on the mission has raised questions about whether the benefits of in-orbit training justify the expense, especially when terrestrial data centers continue to scale. A discussion thread about the mission, for example, notes that the H100 GPU running Google Gemma is solar-powered but also asks whether the launch cost is too high, a skepticism voiced in the same conversation that confirmed the demo.

From proof of concept to product

For now, Starcloud’s orbital training run is a proof of concept, but the company is already signaling that it wants to turn the technology into a product. That likely means selling access to orbital compute capacity, bundling satellite time with AI training services, or partnering with organizations that need to process data in space. The company’s public messaging has emphasized that the same architecture used for the Shakespeare chatbot can be applied to other models and workloads, including those that never send raw data to Earth. A social media post celebrating the mission, for instance, highlights that Starcloud, with Nvidia’s support, achieved the first AI model trained in space using an H100.

Turning that feat into a repeatable service will require more satellites, more automation, and a clearer sense of which customers are willing to pay a premium for orbital training. Starcloud’s backers appear to be betting that defense agencies, climate researchers, and telecom operators will see value in having AI models that live and learn in space. The company’s association with Nvidia and the NVDA ticker symbol has already drawn attention from investors who see orbital compute as a way to extend the AI boom into a new domain, a connection that is made explicit in coverage of how an Nvidia-backed (NVDA) startup is now part of the broader AI investment story.

What comes after the first orbital AI training run

The first AI model trained in space will not be the last. Starcloud’s mission has already inspired comparisons to the early days of cloud computing, when running workloads on someone else’s servers felt exotic until it became the default. If orbital compute follows a similar trajectory, the Shakespeare chatbot and the “Greetings, earthlings” demo will be remembered as the moment when GPUs left the data hall and took up residence in low Earth orbit. One report on the mission notes that the Starcloud-1 satellite overcomes traditional space hardware limits by centering its design on the Nvidia H100, a detail that hints at how future spacecraft might be built around compute rather than sensors.

There is also a consumer angle, however distant. As AI models become embedded in everything from smartphones to cars, the idea of tapping into orbital compute for certain tasks may move from novelty to selling point. A search for products tied to this emerging ecosystem already surfaces references to AI hardware and related gear, including listings that sit at the intersection of space tech and AI. For now, the Nvidia H100 on Starcloud’s satellite is a singular object, a lone chip circling the planet and composing Shakespearean verse, but it points toward a future in which the phrase “in the cloud” might literally mean “in orbit.”
