We Ruined Earth. Now It's Time to Ruin the Moon.
AI needs 100× more compute by 2030. Earth is out of room. Space looks like the answer, until you run the physics, the economics, and the carbon math nobody wants to run.

This article was sparked by a conversation with Aaron CoggedNCode and Anna | how to boss AI after the news broke about Elon Musk’s plans for space infrastructure. That discussion led me to investigate, and this is where it landed.
When Earth Runs Out of Space for the Cloud
The question is no longer whether AI needs more compute. It needs roughly 100 times more by 2030. The question is where that compute physically lives and whether Earth can hold it.
In November 2025, a startup called Starcloud launched a satellite roughly the size of a refrigerator into low Earth orbit aboard a SpaceX rocket. Inside it: an NVIDIA H100 GPU, 100 times more powerful than any processor previously operated in space. Within weeks, it was running Google’s Gemma language model in orbit and returning responses to Earth. The first large language model had left the planet.
It was a proof of concept. But it pointed at something Google, Amazon, Microsoft, and SpaceX are all circling simultaneously: the possibility that the next generation of AI infrastructure won’t be built on Earth at all.
This is either the most logical next step in computing history or an extraordinarily expensive detour. The physics and the economics tell two different stories. Understanding both requires running the actual numbers, not the promotional ones.
Why Earth Is Running Out
To understand why space is even being discussed, you have to understand what an AI data center has become.
Modern AI campuses are not buildings with servers. They are supercomputers the size of small cities: millions of GPUs running in parallel, consuming power in the gigawatt range, requiring entire power plants built next door to keep them operational. The most unglamorous constraint is cooling: these facilities don’t just consume electricity, they generate heat at a scale that requires draining rivers to stop silicon from destroying itself.
Three constraints are converging simultaneously and accelerating.
Power. The electrical grid in most developed markets is effectively sold out for large-scale data center expansion. Permitting new grid connections takes years. Hyperscalers are signing power purchase agreements for nuclear plants that won’t come online until the early 2030s.
Land and permitting. In the US and Europe, new data center construction faces years of environmental review, local opposition, and infrastructure bottlenecks. Speed has become a competitive moat, and Earth’s permitting systems were not designed for the pace AI infrastructure requires.
Water. The numbers here are stark. A single 100-megawatt data center can consume approximately 2.5 billion liters of water per year for cooling, equivalent to the annual water use of roughly 80,000 people. Microsoft alone projects its direct data center water consumption will rise approximately 150% between 2020 and 2030, reaching around 18 billion liters per year, before accounting for the cooling water used by the power plants generating its electricity, which could roughly triple its total water footprint. Communities near major data center clusters are pushing back. Regulators in California, Germany, and the EU are beginning to treat AI campuses as infrastructure subject to environmental disclosure requirements, not just private real estate optimized for cost.
Space, in theory, eliminates all of this. No land limits. No permits. No water required for cooling. Continuous solar exposure without clouds, weather, or night cycles. Unlimited room to expand.
In theory. But space may shift environmental impact rather than erase it. Starcloud and others estimate orbital solar-powered compute could produce roughly one-tenth the operational carbon emissions of gas-powered terrestrial data centers. Independent analysis, including work cited in Scientific American, suggests that once launch and re-entry emissions are included, orbital data centers could end up with a higher total climate impact than efficient Earth-based sites. Depending on whose spreadsheet you believe, orbital compute is either a 10× carbon win or an even dirtier way to compute. That question remains genuinely open.
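To make the disagreement concrete, here is a deliberately crude back-of-envelope sketch in Python. Every parameter in it is an assumption chosen for illustration (launch emissions per kilogram, grid carbon intensity, the multiplier for high-altitude soot and alumina effects, a ten-year facility life); none comes from the studies cited above. The point is not the answer but how easily it flips.

```python
# Crude carbon comparison: a 40 MW orbital facility vs. a gas-powered terrestrial one.
# Every number below is an illustrative assumption, not a measured or cited value.

FACILITY_MW = 40
HOURS_PER_YEAR = 8_760
LIFETIME_YEARS = 10

# Terrestrial: gas-fired electricity at an assumed ~0.4 kg CO2 per kWh.
gas_kg_co2_per_kwh = 0.4
terrestrial_t = (FACILITY_MW * 1_000 * HOURS_PER_YEAR * LIFETIME_YEARS
                 * gas_kg_co2_per_kwh / 1_000)

# Orbital: near-zero operational emissions, but the launches are not free.
launch_mass_kg = 1_200_000       # ~1,200 t total, from this article's base case
kg_co2_per_kg_to_leo = 50        # assumed launch emissions per kg of payload
altitude_multiplier = 3          # assumed penalty for high-altitude effects

orbital_t = launch_mass_kg * kg_co2_per_kg_to_leo * altitude_multiplier / 1_000

print(f"Terrestrial (gas), {LIFETIME_YEARS} yr: ~{terrestrial_t:,.0f} t CO2")
print(f"Orbital launch campaign, one-off: ~{orbital_t:,.0f} t CO2-equivalent")
```

Under these particular assumptions, orbit wins by nearly an order of magnitude. Compare against a clean grid instead of gas, or raise the altitude multiplier, and the conclusion reverses. That is the spreadsheet fight in miniature.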
Plan A: Orbital Data Centers
The compute problem is the easy part
Launching GPUs to space is straightforward. The hardware already exists. Starcloud’s demonstration proved that a state-of-the-art AI chip can survive launch, operate in orbit, and return useful inference results.
What’s harder is launching a functioning data center, because on Earth a data center is surrounded by invisible infrastructure that disappears the moment you leave the atmosphere. Power comes from the grid. Cooling comes from air and water systems. Fiber stitches thousands of servers into one interconnected supercomputer. In orbit, all of it must be rebuilt from scratch, and each component introduces its own set of physics problems.
Space is also actively hostile to silicon. High-energy particles from cosmic rays and solar radiation constantly bombard electronics in orbit, flipping bits in memory, corrupting calculations, and gradually degrading transistors. Earth’s atmosphere and magnetic field absorb most of this radiation at ground level. In low Earth orbit, the magnetic field still provides partial protection, but the environment is not forgiving. Chips must be shielded carefully, and shielding adds mass. Mass drives launch cost. And launch cost is the variable that dominates every other calculation in this entire discussion.
An early business case hiding in plain sight
Before orbital data centers can compete with terrestrial hyperscalers, there is a narrower, earlier use case that makes economic sense today: processing data that is already generated in space.
Earth-observation satellites, defense imaging constellations, and climate monitoring systems collectively generate petabytes of raw data that must be downlinked to Earth for analysis. The cost of that downlink, in time, spectrum, and ground-station infrastructure, is the bottleneck. An AI processor co-located in orbit near an imaging satellite can analyze, filter, and compress data before it ever reaches Earth, transmitting results rather than raw feeds. The business case is bandwidth saved, not compute delivered.
This is the niche phase before hyperscale. The first economically rational orbital data centers will not be extensions of AWS. They will be inference engines riding alongside imaging constellations, where the economics are driven by what you do not have to send down, not by what you can run up there.
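As a toy illustration of that trade, here is the downlink arithmetic under assumed data rates and an assumed onboard filtering ratio (all illustrative, none from a real constellation):

```python
# Illustrative downlink arithmetic for on-orbit preprocessing.
# Rates and ratios are assumptions for illustration only.

raw_tb_per_day = 25.0        # assumed raw imagery from one constellation, TB/day
keep_fraction = 0.02         # assumed: onboard model keeps ~2% as useful detections
downlink_gbps = 1.2          # assumed average ground-station downlink rate

def downlink_hours(tb: float, gbps: float) -> float:
    """Hours of link time needed to move `tb` terabytes at `gbps` gigabits/s."""
    return tb * 8_000 / gbps / 3_600   # TB -> gigabits, then seconds -> hours

print(f"Raw feed:      {downlink_hours(raw_tb_per_day, downlink_gbps):.1f} h/day of link time")
print(f"Filtered feed: {downlink_hours(raw_tb_per_day * keep_fraction, downlink_gbps):.1f} h/day")
```

Under these assumptions the raw feed needs about 46 hours of link time per day, which a day does not have; the filtered feed needs under one. That gap is the business case.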
Power: the part that actually works
Solar power in orbit outperforms ground-based solar by around 8 to 10 times when averaged over a full day: no clouds, no night, no atmospheric absorption, just continuous exposure to full-spectrum sunlight. For a small satellite like Starcloud-1, this works elegantly: unfold the panels, face the sun, collect energy, charge batteries during eclipse, repeat every 90-minute orbit.
The problem emerges at scale.
For a 40-megawatt orbital data center, Starcloud’s near-term target, equivalent to roughly 20,000 GPUs, the solar array geometry becomes extreme. Ten square meters of panels generates a few kilowatts. Scaling to 40 megawatts requires something in the range of a 350×350 meter array: over 100,000 square meters, approximately 400 tons, delivered to orbit in pieces. Dozens of football fields of solar panels, flying at nearly 8 kilometers per second.
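Those figures can be sanity-checked with a few lines of Python. The panel efficiency and areal density below are assumptions chosen for illustration; the 350×350 meter figure above presumably adds packing, pointing, and degradation margins on top of this idealized result.

```python
# Sanity check on solar array geometry for a 40 MW orbital facility.
# Efficiency and areal density are illustrative assumptions.

import math

SOLAR_CONSTANT_W_M2 = 1_361   # sunlight intensity above the atmosphere
panel_efficiency = 0.30       # assumed: high-end space-grade cells
areal_density_kg_m2 = 4.0     # assumed: panels + structure + harness

target_w = 40e6
w_per_m2 = SOLAR_CONSTANT_W_M2 * panel_efficiency      # ~408 W per square meter
area_m2 = target_w / w_per_m2                          # ~98,000 m2 idealized
side_m = math.sqrt(area_m2)                            # ~313 m per side
mass_t = area_m2 * areal_density_kg_m2 / 1_000         # ~390 t

print(f"Array: {area_m2:,.0f} m2 (~{side_m:.0f} m x {side_m:.0f} m), ~{mass_t:,.0f} t")
```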
Enormous, but not physically impossible, especially as launch costs fall with Starship-class vehicles. The upside is real: in orbit, solar is effectively free once deployed. The problem is that every watt of power generated immediately becomes waste heat, and waste heat is this article’s central engineering nightmare.
Cooling: where the physics turns brutal
Here is the misconception that has misled more engineers than any other: space is cold, therefore cooling in space should be easy.
Space is cold. Vacuum is an insulator. These are not the same thing.
A GPU does not care about the ambient temperature of space. It cares whether the heat it generates can escape from silicon. On Earth, heat escapes through convection, as air molecules collide with hot surfaces and carry energy away, and through liquid cooling systems that transfer heat to water. Both are fast, effective, and relatively cheap.
In the near-perfect vacuum of orbit, neither exists. Heat can only leave through radiation, a process governed by the Stefan-Boltzmann law, which states that radiated power scales with surface area and with temperature to the fourth power (P = εσAT⁴). In plain terms: to dump megawatts of waste heat into space without melting your hardware, you need enormous radiator panels. Not large. Enormous.
For a 40-megawatt orbital data center, the math produces radiators in the range of 120,000 square meters of effective surface area, roughly 15 to 20 football fields of panels whose sole function is to emit waste heat as infrared radiation into darkness. At standard spacecraft mass estimates, that represents 400 to 800 tons of hardware dedicated entirely to cooling.
At current launch prices of approximately $5,000 per kilogram, launching the cooling system alone costs $2 to 4 billion. Even with Starship reducing launch costs toward $500 per kilogram, launching the radiators for a single 40-megawatt facility runs $200 to 400 million.
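Here is a minimal sketch of that radiator math, assuming a radiating temperature of about 285 K, an emissivity of 0.9, and an areal density of 4 kg/m² (all illustrative). It ignores absorbed sunlight and Earth infrared, which a real design would have to subtract from the radiated flux.

```python
# Stefan-Boltzmann sizing for the radiators of a 40 MW orbital facility.
# Temperature, emissivity, and areal density are illustrative assumptions.

SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W/(m2*K^4)
emissivity = 0.90          # assumed high-emissivity radiator coating
radiator_temp_k = 285      # assumed ~12 C radiating surface
waste_heat_w = 40e6        # essentially every input watt becomes heat

flux_w_m2 = emissivity * SIGMA * radiator_temp_k ** 4     # ~337 W/m2 radiated
area_m2 = waste_heat_w / flux_w_m2                        # ~119,000 m2 effective
mass_kg = area_m2 * 4.0                                   # assumed 4 kg/m2

print(f"Radiator area: ~{area_m2:,.0f} m2, mass: ~{mass_kg/1_000:,.0f} t")
for price_per_kg in (5_000, 500):                         # $/kg: today vs Starship-era
    print(f"  launch at ${price_per_kg:,}/kg: ${mass_kg * price_per_kg / 1e9:.1f}B")
```

That lands around 119,000 m², ~475 t, $2.4B at today’s prices and $0.2B at Starship-era prices, consistent with the ranges above.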
And radiators face a problem beyond mass: thermal cycling. In low Earth orbit, a spacecraft circles the planet every 90 minutes. It swings from direct sunlight at 120°C to deep shadow at −170°C, a swing of nearly 300 degrees, every orbit, all day, every day, for the facility’s entire operational life. The structural stress this places on materials accumulates continuously. At satellite scale, with a few hundred watts of electronics, this is manageable. At megawatt data center scale, no proven solution currently exists.
Bandwidth: the quiet killer
Even if power and cooling were solved, a data center that cannot communicate with Earth is not a data center. It is, as one engineer put it, a very expensive space heater.
Modern AI clusters exchange data between racks at speeds approaching 1.6 terabits per second. Getting comparable volumes of data down to Earth from orbit requires free-space optical communication: laser beams aimed with extreme precision between the satellite and ground stations. In clear conditions, this can reach hundreds of gigabits per second. Between satellites in orbit, with no atmosphere to interfere, Starlink’s laser links already demonstrate this.
The bottleneck is the last segment: getting the signal through Earth’s atmosphere. Clouds scatter laser light. Weather disrupts beam alignment. Even atmospheric turbulence distorts precision optics. Phased-array antennas and adaptive optics help focus the signal, but the result is still nowhere near the fiber capacity that terrestrial data centers take for granted.
The mismatch is structural: massive compute in orbit, narrow pipelines to Earth. The further from Earth the facility sits, the worse this becomes.
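To see the mismatch in numbers, here is a toy comparison of moving one petabyte inside the cluster versus down to the ground, assuming a generous 200 Gbps average optical downlink (an assumption, not a measured figure):

```python
# Illustrating the structural bandwidth mismatch: in-cluster vs. space-to-ground.
# The downlink rate is an assumed clear-sky average, for illustration only.

intra_cluster_tbps = 1.6        # per-link rack-to-rack speed cited above
optical_downlink_gbps = 200     # assumed achievable average through the atmosphere

dataset_pb = 1.0                # one petabyte of results to bring home
bits = dataset_pb * 8e15        # petabytes -> bits

print(f"In-cluster:  {bits / (intra_cluster_tbps * 1e12) / 3_600:.1f} h")
print(f"To ground:   {bits / (optical_downlink_gbps * 1e9) / 3_600:.1f} h")
```

About 1.4 hours versus 11 hours, an 8× gap, and that is before clouds and weather take their cut.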
Maintenance: the problem with no solution
On Earth, when a server fails, a technician replaces it. In orbit, there are no technicians.
The only viable maintenance philosophy for orbital hardware is redundancy: launch more capacity than required, so that when components fail, the system routes around them. Software handles most fixes remotely. When hardware truly fails, the loss is accepted and replacement hardware waits for the next available launch window.
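A quick sketch of what that philosophy costs, under an assumed 5% annual irreparable failure rate (an illustrative figure, not a vendor statistic):

```python
# Redundancy math: how much extra capacity must launch on day one
# if nothing can ever be repaired. Failure rate is an assumption.

annual_failure_rate = 0.05   # assumed: 5% of units fail per year, irreparably
lifetime_years = 5
required_units = 20_000      # GPUs that must still be alive at end of life

surviving_fraction = (1 - annual_failure_rate) ** lifetime_years   # ~77%
units_to_launch = required_units / surviving_fraction              # ~25,800

print(f"Survivors after {lifetime_years} yr: {surviving_fraction:.0%}")
print(f"Launch {units_to_launch:,.0f} units to keep {required_units:,} alive at end of life")
```

Under these assumptions, roughly 29% more hardware, and 29% more launch mass, buys nothing except the ability to fail gracefully.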
This is exactly how Starlink operates: not repairing satellites, but replacing them at a steady cadence of approximately two per day across a constellation that numbered over 7,000 satellites by mid-2025. Units de-orbit and burn in the atmosphere. New ones go up. At satellite scale, the economics work.
At data center scale, replacement economics are brutal. Every upgrade is tied to a rocket. Every failure requires a launch window. And the debris question (the thin layer of aluminum oxide particles left in the upper atmosphere by de-orbiting satellites) is still being studied by atmospheric scientists. At current Starlink volumes, the effect is unclear. At data center replacement volumes, the scale is different.
The economic summary for orbit
Return to the base case: a 40-megawatt orbital data center. That requires approximately 400 tons of compute, 400 tons of radiators, and 400 tons of solar array infrastructure: roughly 1,200 tons total.
At today’s launch prices of $5,000 per kilogram, getting that mass to orbit costs roughly $6 billion, before accounting for the hardware itself, integration, operations, and bandwidth infrastructure.
At Starship-era prices of $500 per kilogram, the launch cost alone drops to roughly $600 million, still a substantial floor before a single GPU workload runs.
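For reference, the arithmetic behind those two figures, using the 1,200-ton baseline from above:

```python
# Total launch bill for the ~1,200 t baseline at the two price points above.

components_t = {"compute": 400, "radiators": 400, "solar array": 400}
total_kg = sum(components_t.values()) * 1_000   # 1,200,000 kg

for label, price_per_kg in [("today", 5_000), ("Starship-era", 500)]:
    print(f"{label}: ${total_kg * price_per_kg / 1e9:.1f}B launch cost")
```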
The metric that ultimately matters is watts per dollar of useful compute delivered. At current and near-term projections, orbital data centers do not yet compete with terrestrial alternatives on this metric. They may, as launch costs continue falling and solar panel mass improves. The crossover point has not arrived.
What has arrived is the proof of concept. Physics says it is possible. Economics says not yet.
Plan B: Data Centers on the Moon
If orbital data centers are expensive, lunar data centers are a different category of ambition entirely.
The attraction is theoretical: the Moon has no land limits, no permitting requirements, no political constraints, and surface area enough to scale indefinitely. If the problem with Earth is physical constraint, the Moon removes it entirely.
The engineering checklist immediately becomes more severe.
Radiation: fundamentally harder than orbit
Low Earth orbit receives partial protection from Earth’s magnetic field. The Moon has no magnetic field and no atmosphere. Cosmic rays and solar radiation hit the surface directly, with nothing between the particles and the silicon.
During a solar storm, a massive cloud of charged particles that can last hours or days, the Moon’s surface receives direct bombardment. A standard GPU could fail catastrophically after a major solar storm event, and without hardening and shielding, operational lifetimes would be severely limited. Radiation-hardened chips exist, but they cost 10 to 100 times more than commercial chips and run significantly slower. They also use more conservative semiconductor processes (7-nanometer nodes rather than cutting-edge 2-nanometer) because the smallest transistors are the most fragile. A single high-energy particle striking a 2nm transistor can knock out an entire computing core.
The Moon also introduces lunar dust: fine, abrasive, electrostatically charged particles that cling to every surface, coat radiator panels, interfere with moving parts, and degrade hardware over months. No Earth-based simulation has fully replicated its long-term effects.
Power: the 14-day problem
The Moon rotates once relative to the Sun every 29 Earth days, which means any given site on the surface gets approximately 14 days of continuous sunlight followed by 14 days of complete darkness.
During the dark period, solar power drops to zero. Powering a data center through 14 consecutive days of darkness requires either massive battery storage, which adds mass, which adds launch cost, or nuclear reactors deployed on the lunar surface, which introduce regulatory, safety, and operational complexity of an entirely different order.
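To see why batteries alone struggle, here is the storage arithmetic, assuming a generous 300 Wh/kg pack-level specific energy (an assumption; space-qualified packs would likely be heavier):

```python
# Why the 14-day lunar night is brutal for batteries: storage mass
# for a 40 MW facility. Specific energy is an illustrative assumption.

facility_mw = 40
night_hours = 14 * 24          # 336 hours of darkness
battery_wh_per_kg = 300        # assumed: good lithium-ion, pack level

energy_mwh = facility_mw * night_hours                      # 13,440 MWh
battery_mass_t = energy_mwh * 1e6 / battery_wh_per_kg / 1_000

print(f"Energy to ride out the night: {energy_mwh:,.0f} MWh")
print(f"Battery mass at {battery_wh_per_kg} Wh/kg: ~{battery_mass_t:,.0f} t")
```

Roughly 45,000 tons of batteries, more than thirty times the launch mass of the entire 40-megawatt orbital facility sketched earlier, which is why nuclear power keeps entering the conversation.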
Locations near the lunar poles receive more continuous sunlight in permanently lit crater rims, but the surface area of those locations is limited, and the transmission infrastructure to move power from rim to facility adds additional engineering requirements.
Latency: the physics ceiling
The Moon sits approximately 384,000 kilometers from Earth. Light takes 1.3 seconds to cross that distance one way. Round-trip latency, the minimum time between sending a request and receiving a response, is approximately 2.6 seconds under perfect conditions.
For real-time AI inference, the class of applications that constitutes the majority of commercial AI compute demand, 2.6 seconds of irreducible latency is a disqualifying constraint. No engineering advancement changes this. It is a physics ceiling, not an engineering problem.
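The floor is pure geometry: distance divided by the speed of light, doubled for the round trip. The 550 km figure below assumes a LEO satellite directly overhead, the best possible case.

```python
# The latency floor is geometry: distance / speed of light, round trip.

C_KM_S = 299_792          # speed of light in vacuum, km/s

for name, distance_km in [("LEO (550 km, overhead)", 550), ("Moon", 384_000)]:
    rtt_ms = 2 * distance_km / C_KM_S * 1_000
    print(f"{name}: round trip >= {rtt_ms:,.1f} ms")
```

About 3.7 milliseconds for LEO, about 2,560 milliseconds for the Moon. Nothing engineered will ever close that gap.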
What the Moon’s latency profile does support is cold storage: large archival datasets and batch processing workloads where results can be computed over hours and transmitted back. Lonestar Data Holdings is pursuing exactly this use case, positioning lunar data centers as off-planet backup for Earth’s most critical data. Their first test mission launched in March 2025, though the landing was unsuccessful. The direction is clear even if the timeline has shifted.
The lunar economic floor
Getting mass to low Earth orbit costs approximately $5,000 per kilogram at current prices. Getting mass to the lunar surface costs on the order of ten to twenty times more per kilogram, because a soft landing is technically demanding, never guaranteed, and requires additional propulsion mass that compounds on itself: every kilogram of lander and fuel must itself first be launched from Earth.
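Applied to the earlier numbers, the premium compounds quickly. The radiator mass below is the mid-range of the 400 to 800 ton estimate from the cooling section:

```python
# What the 10-20x lunar delivery premium does to the earlier orbital numbers.
# Radiator mass is the mid-range of this article's 400-800 t estimate.

leo_price_per_kg = 5_000
radiator_mass_kg = 600_000            # ~600 t of cooling hardware

for multiplier in (10, 20):
    lunar_price = leo_price_per_kg * multiplier
    cost_billions = radiator_mass_kg * lunar_price / 1e9
    print(f"{multiplier}x LEO: ${lunar_price:,}/kg, radiators alone: ${cost_billions:.0f}B")
```

$30 to $60 billion to land the cooling system alone, before a single GPU, solar panel, or gram of structure arrives.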
Every problem that makes orbital data centers expensive becomes more expensive on the Moon. Every unsolved engineering challenge in orbit must be solved again in a harsher environment, further from any supply chain, with longer communication delays, and at greater launch cost.
The only scenario where lunar data centers make economic sense is one where the Moon develops its own economic ecosystem: local manufacturing, materials extraction from lunar regolith, in-situ resource utilization that eliminates the dependency on Earth launches for each component and upgrade. That scenario is measured in decades, not years.
Madness or Inevitability?
The honest summary of where this stands in 2026:
Orbit: the physics is solved in principle. Starcloud has demonstrated that high-performance GPU compute can operate in space, run real AI workloads, and return useful results. The engineering challenges around cooling, bandwidth, and maintenance are severe but not theoretically insurmountable. The economics have not yet crossed the threshold where orbital compute competes with terrestrial alternatives on cost per useful watt. The crossover requires continued launch cost reduction, improvement in solar panel mass efficiency, and advances in radiative cooling, all of which are in active development. What is missing is two or three engineering breakthroughs, not twenty.
The Moon: the physics adds fundamental constraints that orbit does not have. The radiation environment is harsher. The power problem is more severe. The latency floor is determined by the speed of light and cannot be engineered away. The economics are an order of magnitude more challenging. The viable near-term use case is limited to archival storage and batch processing, a real use case, but a narrow one relative to the ambition.
What is not in doubt is the direction. Jeff Bezos has stated publicly that gigawatt data centers in space will exist over the next couple of decades. Eric Schmidt backed Relativity Space with an eye toward orbital compute infrastructure. Google’s Project Suncatcher is placing tensor processing units on solar-powered orbital satellites. Aetherflux plans an orbital data center satellite by 2027. NASA issued a request for information in December 2025 seeking AI systems capable of Earth-independent space operations.
The question being asked is not whether compute moves to space. It is when the economics and engineering cross the threshold and which companies will own that infrastructure when they do.
The answer to that question will shape who controls the next layer of AI infrastructure. And the next layer of AI infrastructure will shape nearly everything else.
Elon Musk has a gift for making his ambitions feel like everyone’s urgency. Nations are mobilizing. Budgets are shifting. The race is on, whether the economics justify it or not.
The question nobody is asking loudly enough: do we have to enter a race we didn’t vote for, toward a destination we can’t yet afford, with environmental costs we haven’t finished counting?
The cloud is heading to the clouds. The spreadsheet just hasn’t caught up to the ambition yet.
One deliberate omission: this article does not discuss the Van Allen radiation belts, the zones of magnetically trapped charged particles surrounding Earth that present significant additional radiation hazards for electronics passing through them. Depending on orbital altitude and trajectory, the Van Allen belts could have direct implications for the viability and hardware design of orbital data centers. That dimension deserves its own investigation.
Key figures: Starcloud-1 launched November 2, 2025 with an NVIDIA H100 GPU on a SpaceX Falcon 9 (CNBC, NVIDIA, Starcloud). First LLM inference run in orbit, December 2025 (Starcloud). Starcloud-2 planned October 2026 (Starcloud). Lonestar Data Holdings lunar test mission launched March 2025, landing unsuccessful (Reuters, Lonestar). Google Project Suncatcher announced November 4, 2025 (Forbes, Google). Launch cost estimates: ~$5,000/kg current, ~$500/kg projected Starship commercial pricing (SpaceX, industry analyses). Lunar surface delivery: estimated 10–20× LEO cost per kilogram depending on mission architecture and launcher (lunar mission studies).
Sources: Starcloud, NVIDIA, SpaceX, CNBC, Reuters, Lonestar Data Holdings, Google/Alphabet, Forbes, NASA, EESI, Microsoft reporting, Scientific American, data-center and space-solar technical analyses
https://www.cnbc.com/2025/12/10/nvidia-backed-starcloud-trains-first-ai-model-in-space-orbital-data-centers.html
https://blogs.nvidia.com/blog/starcloud/


