Aetherflux Inc. wants to put AI servers in orbit, and it is giving itself less than two years to do it.
The California-based startup, which originally set out to beam solar energy from space to the ground via laser, disclosed in May 2025 that it is pivoting toward something more ambitious: an orbital data center program called “Galactic Brain.” The company’s announcement targets a first deployment by the first quarter of 2027, arguing that space offers two things terrestrial data centers are running out of: power and cooling.
The pitch lands at a moment when those shortages are no longer theoretical. The U.S. Department of Energy projected in late 2024 that data centers could consume up to 12 percent of the nation’s electricity by 2028, roughly triple their 2023 share. Utilities in Northern Virginia, the world’s densest data center corridor, have delayed new grid connections by years. Several municipalities have imposed moratoriums on new facilities. Against that backdrop, Aetherflux is making a provocative case: if the ground cannot supply enough power and cooling for AI, move the computers somewhere that can.
The physics behind the pitch
In low Earth orbit, sunlight arrives unfiltered by the atmosphere at roughly 1,360 watts per square meter. How much of each roughly 90-minute orbit is sunlit depends on the orbit's geometry: a satellite in a typical low Earth orbit spends up to about a third of each pass in Earth's shadow, while a dawn-dusk sun-synchronous orbit can keep its panels illuminated almost continuously. Aetherflux says it plans to harvest that energy with onboard solar arrays and use it to run compute hardware directly in space, skipping the step of beaming power down to Earth.
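A back-of-envelope power budget gives a sense of scale. The array area, cell efficiency, and sunlit fraction below are illustrative assumptions for the sketch, not Aetherflux figures:

```python
# Back-of-envelope orbit-average solar power for an orbital data center.
# All design parameters are illustrative assumptions, not Aetherflux figures.

SOLAR_CONSTANT_W_M2 = 1361.0   # sunlight intensity above the atmosphere
CELL_EFFICIENCY = 0.30         # high-end multi-junction space solar cells
ARRAY_AREA_M2 = 3000.0         # hypothetical array size
SUNLIT_FRACTION = 0.65         # typical LEO; close to 1.0 for a dawn-dusk orbit

def orbit_average_power_kw(area_m2: float, sunlit_fraction: float) -> float:
    """Average electrical power over one orbit, in kilowatts."""
    peak_w = SOLAR_CONSTANT_W_M2 * CELL_EFFICIENCY * area_m2
    return peak_w * sunlit_fraction / 1000.0

if __name__ == "__main__":
    print(f"{orbit_average_power_kw(ARRAY_AREA_M2, SUNLIT_FRACTION):.0f} kW orbit-average")
```

Under these assumptions a 3,000-square-meter array averages on the order of 800 kilowatts over an orbit, which is why multi-megawatt concepts imply very large structures.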
Cooling is the other half of the equation. On the ground, data centers rely on water-intensive chillers or massive air-handling systems to pull heat away from processors. In orbit, with no air or water to carry heat away, radiation is the only option: spacecraft shed heat as infrared energy into the near-absolute-zero background of deep space. A research preprint hosted on arXiv describes a tether-based architecture for solar-powered orbital data centers capable of generating power in the multi-megawatt range, with radiative panels serving as the primary thermal management system. The paper, which has not yet undergone formal peer review, calls the cold of space “a far more efficient heat sink than anything available on Earth’s surface.”
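The radiator sizing problem follows from the Stefan-Boltzmann law: a surface at temperature T radiates roughly εσT⁴ per square meter. A minimal sketch, using an illustrative panel temperature, emissivity, and effective sink temperature rather than figures from the preprint:

```python
# Radiator area needed to reject a given heat load purely by radiation.
# Panel temperature, sink temperature, and emissivity are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_load_w: float,
                     panel_temp_k: float = 320.0,  # ~47 C radiator surface
                     sink_temp_k: float = 250.0,   # effective sink with Earth and albedo loading
                     emissivity: float = 0.90) -> float:
    """Area needed to radiate heat_load_w, net of what the environment radiates back."""
    net_flux_w_m2 = emissivity * SIGMA * (panel_temp_k**4 - sink_temp_k**4)
    return heat_load_w / net_flux_w_m2

if __name__ == "__main__":
    # Rejecting 1 MW of server heat:
    print(f"{radiator_area_m2(1_000_000):.0f} m^2 of radiator surface")
```

Even with generous assumptions, megawatt-class heat rejection implies radiator panels measured in thousands of square meters, which is why orbital data center concepts treat thermal management as a primary structural driver rather than an afterthought.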
A second arXiv preprint tackles a question the physics alone cannot answer: which AI workloads actually make sense to run in orbit? The authors propose a semantic framework that sorts tasks by their data transfer requirements, latency sensitivity, and proximity to sensor sources. Their conclusion is that inference, the process of running a trained model to generate predictions or outputs, is a stronger candidate for orbital placement than training. Training large AI models requires moving enormous volumes of data between tightly coupled GPU clusters for weeks at a time, conditions that are difficult to replicate across a constellation of satellites. Inference near a data source, such as processing imagery directly aboard an Earth observation satellite, could reduce latency and avoid the cost of downlinking raw feeds to ground-based clouds.
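The preprint's framework is not published as code, and the sketch below is a hypothetical illustration of the kind of triage it describes rather than the authors' actual method: sorting workloads by interconnect coupling, data origin, and latency tolerance.

```python
# Hypothetical triage of AI workloads for orbital vs. terrestrial placement.
# These rules illustrate the preprint's reasoning; they are not its actual framework.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_source_in_orbit: bool      # e.g. imagery generated aboard the satellite
    tightly_coupled_cluster: bool   # needs sustained high-bandwidth GPU interconnect
    latency_tolerance_ms: float

def placement(w: Workload) -> str:
    if w.tightly_coupled_cluster:
        return "ground"  # weeks-long training across coupled GPUs is hard to replicate in orbit
    if w.data_source_in_orbit:
        return "orbit"   # process near the sensor and skip downlinking raw feeds
    if w.latency_tolerance_ms < 50:
        return "ground"  # LEO round trips plus ground handoff eat tight latency budgets
    return "either"
```

Running it on the paper's two headline cases matches its conclusion: large-model training sorts to the ground, while inference on sensor data generated in orbit sorts to space.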
Aetherflux is not alone
The idea of computing in orbit is not unique to Aetherflux. Lumen Orbit, a startup that emerged from Y Combinator, announced its own orbital data center plans in 2024 and has raised venture capital to pursue them. The European Space Agency has funded studies on in-orbit cloud computing. And the broader space-solar sector, which includes firms like Virtus Solis and Space Solar in the U.K., has attracted growing interest from governments and investors looking for alternatives to grid-constrained terrestrial power.
What distinguishes Aetherflux’s announcement is the specificity of its timeline. A Q1 2027 target is aggressive by any measure. Most space-based solar power concepts remain in the study or early prototype phase, and none have demonstrated commercial-scale compute operations in orbit.
The long list of unknowns
Between a press release and a functioning orbital data center sits a gauntlet of technical, regulatory, and financial hurdles that Aetherflux has not yet publicly addressed.
Regulatory approvals are missing. No public filings from the FCC or FAA confirm that Aetherflux has secured launch licenses, orbital slots, or spectrum allocations for data relay. Without those permissions, the Q1 2027 date is a company aspiration, not a scheduled event. Regulatory timelines in the space industry routinely stretch beyond initial projections, even for well-funded operators with flight heritage.
Cost projections have not been disclosed. Aetherflux has released no economic models, pricing structures, or per-compute-unit cost estimates. Launch costs have dropped sharply over the past decade thanks to reusable rockets from SpaceX and others, but placing server-grade hardware in orbit, maintaining it, insuring it, and beaming results back to Earth is a fundamentally different cost equation than leasing rack space in a warehouse. The arXiv papers provide engineering parameters but stop short of full lifecycle cost analysis.
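In the absence of disclosed economics, a rough amortization shows why the cost equation differs from terrestrial hosting. Every figure below is an assumption chosen for scale, not a disclosed Aetherflux number:

```python
# Illustrative launch-cost amortization for orbital compute hardware.
# Every figure here is an assumption for scale, not a disclosed Aetherflux number.

LAUNCH_COST_PER_KG = 3000.0  # rough current price per kg to LEO on a reusable rocket
RACK_MASS_KG = 1500.0        # one server rack plus structure, shielding, radiator share
RACK_POWER_KW = 40.0         # compute power the rack draws
LIFETIME_YEARS = 5.0         # assumed on-orbit service life, with no servicing

launch_cost = LAUNCH_COST_PER_KG * RACK_MASS_KG
lifetime_hours = LIFETIME_YEARS * 8760
cost_per_kwh_launch_only = launch_cost / (RACK_POWER_KW * lifetime_hours)

print(f"launch alone: ${launch_cost:,.0f}, "
      f"or ${cost_per_kwh_launch_only:.2f} per kWh of compute delivered")
```

Under these assumptions, launch alone works out to several dollars per kilowatt-hour of compute, an order of magnitude above typical terrestrial electricity prices, before hardware, insurance, or downlink costs enter the ledger. That gap is why the undisclosed economics are a central open question.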
No prototypes have flown. The tether-based architecture paper discusses radiation shielding and thermal cycling, two of the harshest challenges for electronics in space. But there is no publicly available evidence that Aetherflux has built, tested, or launched hardware that validates these designs under real orbital conditions. Terrestrial data centers are engineered for continuous uptime with redundant power, cooling, and networking. Matching that reliability in orbit, where hardware faces charged-particle bombardment and micrometeoroid impacts, remains unproven.
No customers have signed on. None of the major AI companies driving the current surge in compute demand, including Microsoft, Google, Amazon, and Meta, has publicly confirmed interest in purchasing orbital compute capacity. Cloud buyers tend to be conservative about mission-critical workloads, and shifting them to an unproven platform would require strong guarantees on latency, security, and long-term support that Aetherflux has not yet offered.
The addressable market may be narrow. If orbital data centers are best suited for inference tasks near sensors, the total demand could represent only a sliver of the broader AI compute market, which is currently dominated by massive training runs. Edge-adjacent inference, such as processing satellite imagery on orbit, is a natural fit, but it is not clear that this segment alone can sustain the high fixed costs of space infrastructure.
Data governance is uncharted territory. Moving sensitive data or AI models into orbit raises jurisdictional questions that have not been fully mapped. Governments may seek to regulate or restrict certain types of processing in space, particularly where military or dual-use applications are involved. Aetherflux has not outlined how it plans to handle export controls, encryption requirements, or cross-border data transfer rules.
Where orbital compute stands against the terrestrial buildout
Ground-based data centers will remain the backbone of AI compute for years to come. The billions of dollars flowing into new facilities from hyperscalers and sovereign wealth funds reflect a bet that terrestrial infrastructure, despite its constraints, is the fastest path to meeting near-term demand. Orbital computing, if it works, would not replace those investments. It would supplement them, potentially serving niche workloads where proximity to space-based sensors or freedom from grid limitations offers a genuine advantage.
Aetherflux’s Galactic Brain program is, as of spring 2026, a company announcement backed by plausible physics and a pair of unreviewed academic papers. That is a legitimate starting point for a technology that could eventually matter. It is not yet proof that AI servers belong in space. The milestones to watch are concrete: regulatory filings, prototype launches, and the first signed customer. Until those arrive, the project sits squarely in the category of high-ambition, high-risk bets that the space industry produces regularly and delivers on rarely.
*This article was researched with the help of AI, with human editors creating the final content.*