For decades, “the cloud” has been a clever marketing term. Your data isn’t floating anywhere — it’s sitting in a concrete building in Virginia or Oregon, consuming megawatts of electricity and millions of gallons of cooling water. But in 2026, for the first time, the cloud might actually become orbital. Both the United States and China are racing to build AI computing infrastructure in space — solar-powered server farms circling the Earth. The metaphor is becoming physical reality, and the stakes are enormous.
Why Space? The Power and Cooling Crisis
The obvious question is: why would anyone launch a data center into orbit when you could just build one on the ground? The answer comes down to the two biggest headaches in AI infrastructure: power and cooling.
Training large AI models requires staggering amounts of electricity. A single large data center campus can consume as much power as a small city. Globally, there’s a scramble for grid capacity. New data centers are being delayed because the electrical grid literally cannot supply enough power. Some regions have imposed moratoriums on new construction. Meanwhile, cooling is the other nightmare — many facilities use millions of gallons of fresh water for evaporative cooling towers, creating conflicts with local communities over water rights and environmental impact.
Space solves both problems elegantly. In the right orbital configuration you get near-continuous sunlight: no clouds, no nighttime. A solar panel in orbit generates roughly five times more energy than the same panel on the ground, both because there is no atmosphere absorbing the light and because there are almost no dark hours cutting into output. And for cooling, the vacuum of space acts as an essentially infinite heat sink: you radiate waste heat as infrared energy directly into the void. No water needed, no evaporation towers, no cooling-related environmental footprint.
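A quick back-of-envelope calculation makes that "roughly five times" figure plausible. The sketch below uses illustrative inputs, not mission data: the ~1,361 W/m² solar constant above the atmosphere, a good terrestrial site averaging about 5.5 peak-sun hours per day, and a dawn-dusk sun-synchronous orbit that is almost never in eclipse.

```python
# Back-of-envelope check of the "roughly five times" solar-energy figure.
# Illustrative inputs only; real yields depend on orbit, panel pointing, and site.

SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
GROUND_PEAK = 1000.0      # W/m^2 at noon on a clear day at sea level
PEAK_SUN_HOURS = 5.5      # peak-sun-equivalent hours/day at a good terrestrial site
SUNLIT_FRACTION = 0.99    # dawn-dusk sun-synchronous orbit, almost never in eclipse

orbit_kwh_per_day = SOLAR_CONSTANT * 24 * SUNLIT_FRACTION / 1000
ground_kwh_per_day = GROUND_PEAK * PEAK_SUN_HOURS / 1000

print(f"orbit:  {orbit_kwh_per_day:.1f} kWh per m^2 per day")
print(f"ground: {ground_kwh_per_day:.1f} kWh per m^2 per day")
print(f"ratio:  {orbit_kwh_per_day / ground_kwh_per_day:.1f}x")
```

With those inputs the orbital panel delivers roughly six times the daily energy of the same panel on the ground, squarely in the ballpark of the published claims.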
Elon Musk crystallized this at Davos in January 2026: “It’s a no-brainer building solar-power data centers in space. The lowest-cost place to put AI will be space, and that will be true within two years, three at the latest.”
Two to three years is characteristically ambitious for Musk. But the underlying physics genuinely checks out. The question isn’t whether space-based compute makes theoretical sense — it’s whether we can engineer it affordably and reliably.
The Players: From Startups to Superpowers
An entire ecosystem is forming around the concept of orbital computing. The players range from tech giants to venture-backed startups to national governments.
Google’s Project Suncatcher, announced in late 2025, represents the most systematic corporate approach. Google is partnering with Planet Labs to launch two prototype satellites by early 2027, each carrying Google’s own TPU chips — the same custom AI processors they use in terrestrial data centers. To validate the hardware for space, they blasted the TPUs with a proton beam in a lab to simulate orbital radiation. The chips survived nearly three times the dose they’d encounter in actual orbit.
The long-term vision is breathtaking: constellations of 81 satellites arranged in kilometer-wide arrays, connected to each other by laser communication links. Google’s own research paper estimates that launch costs need to drop below $200 per kilogram by the mid-2030s to make space data centers cost-competitive with terrestrial facilities. Current costs are roughly seven to eight times higher.
Starcloud (formerly Lumen Orbit), a Y Combinator startup out of Redmond, Washington, has already made history. In December 2025, it trained the first AI model ever in orbit, using a 60-kilogram satellite, about the size of a small refrigerator, equipped with an NVIDIA H100 GPU. The team loaded Google's Gemma language model onto the satellite and queried it from Earth. CEO Philip Johnston said the mission represented 100 times more powerful GPU compute than anything previously operated in space. The ultimate goal? A 5-gigawatt orbital data center with solar and cooling panels roughly four kilometers across, larger than most neighborhoods.
SpaceX itself is entering the game. Musk confirmed in October 2025 that SpaceX “will be doing” data centers in space, suggesting that next-generation Starlink satellites could be scaled up for compute workloads. The proposed SpaceX-xAI merger, reported in late January 2026, would create the full vertical stack under one corporate roof: the rockets to launch orbital compute, the Starlink network to connect it, and xAI’s Grok to run on it. SpaceX is targeting a massive IPO potentially valued above a trillion dollars, with orbital AI infrastructure reportedly a key component of the investor pitch.
Jeff Bezos and Blue Origin are eyeing the field as well. Bezos has predicted that space-based facilities will outperform Earth-based servers within decades.
China’s “Space Cloud” and the Geopolitical Dimension
The race isn’t just commercial — it’s geopolitical. On January 29, 2026, Reuters reported that CASC (China Aerospace Science and Technology Corporation), China’s main space contractor, released a five-year development plan with jaw-dropping ambitions.
CASC pledged to “construct gigawatt-class space digital-intelligence infrastructure.” A gigawatt is roughly the output of a large nuclear power plant. They’re talking about nuclear-plant levels of solar power generation in orbit, dedicated to AI compute. The plan calls for a full “Space Cloud” by 2030 that integrates cloud, edge, and terminal computing capabilities — processing Earth’s data directly from orbit.
This isn’t aspirational marketing. The CASC policy document from December identifies orbital computing as a core pillar of China’s upcoming 15th Five-Year Plan — the binding national economic roadmap that drives government investment and industrial policy.
The plan goes further. CASC announced it will achieve operational suborbital space tourism within five years, then gradually develop orbital tourism. China also inaugurated its first School of Interstellar Navigation at the Chinese Academy of Sciences, covering interstellar propulsion and deep-space navigation. Xinhua, the state news agency, wrote that “the next 10 to 20 years will be a window for leapfrog development in China’s interstellar navigation field.”
The geopolitical context adds urgency. China’s key bottleneck is reusable rockets. SpaceX’s Falcon 9 achieves routine reusability, landing and relaunching boosters sometimes within days, which is how Starlink came to dominate low Earth orbit. China has not yet completed a successful reusable rocket test; its Long March 12A recovery attempt failed in December 2025. China achieved 93 space launches in 2025, a national record, but none with recovered first stages.
Reusability is everything for cost. Without it, every satellite launch is dramatically more expensive. China has the ambition and government backing, but the U.S. currently holds the engineering lead on the delivery mechanism. That said, Chinese commercial startups like LandSpace are maturing rapidly and targeting rocket recovery by mid-2026. The gap could close faster than expected.
The Hard Problems Nobody’s Solved Yet
The physics of space-based computing is compelling. The engineering, however, presents formidable unsolved challenges.
Thermal management is counterintuitively difficult. Yes, space is cold — but there’s no air to carry heat away through convection. All cooling must happen through radiative panels that emit infrared energy. NASA studies show these radiators can account for over 40% of a spacecraft’s total power system mass at high power levels. The cooling system might literally weigh more than the computers it’s protecting. Designing compact systems that keep dense AI chips within safe operating temperatures remains one of the hardest unsolved problems.
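The Stefan-Boltzmann law shows why those radiators grow so large. The sketch below sizes a double-sided panel rejecting 1 MW of waste heat at a 300 K surface temperature and 0.9 emissivity; the numbers are illustrative, and the estimate ignores absorbed sunlight and Earth infrared, both of which would push the required area higher.

```python
# Stefan-Boltzmann sizing of a radiator that must reject 1 MW of waste heat.
# Illustrative numbers, not a design; absorbed sunlight and Earth infrared
# are ignored here and would only increase the required area.

SIGMA = 5.670e-8        # W/m^2/K^4, Stefan-Boltzmann constant
EMISSIVITY = 0.90       # typical radiator coating
T_RADIATOR = 300.0      # K; the chips must run hotter than this to push heat into it
WASTE_HEAT_W = 1.0e6    # 1 MW of compute waste heat

flux_per_face = EMISSIVITY * SIGMA * T_RADIATOR**4   # W/m^2 emitted by one face
area = WASTE_HEAT_W / (2 * flux_per_face)            # a flat panel radiates from both faces

print(f"emitted flux:  {flux_per_face:.0f} W/m^2 per face")   # ~413 W/m^2
print(f"radiator area: {area:.0f} m^2 for 1 MW")              # ~1,200 m^2
```

More than a thousand square meters of deployable panel just to shed a single megawatt hints at why the radiators, their structure, and their coolant loops come to dominate the mass budget.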
Maintenance is essentially impossible in the traditional sense. On Earth, data center technicians swap failed drives, upgrade components, and patch hardware constantly. In orbit, every hardware failure is potentially a write-off. You’d need robotic servicing missions or entirely new satellite launches to fix anything. Autonomous repair capabilities don’t exist at the required scale.
Networking presents enormous challenges. Modern AI training requires high-bandwidth, low-latency interconnects between processing nodes. Replicating this in orbit means laser communication links carrying multi-terabit traffic between spacecraft that are each traveling at roughly 27,000 kilometers per hour, and keeping a laser locked onto a moving counterpart at that speed demands extraordinary pointing precision. Orbital drift, alignment maintenance, and weather disruptions on ground-to-space links all degrade performance.
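To put "extraordinary precision" in numbers, a rough diffraction estimate is enough. The parameters below are purely illustrative (a 1550 nm telecom-band laser, a 10 cm transmit telescope, a 100 km baseline to the neighboring satellite) and are not tied to any announced system.

```python
# Order-of-magnitude pointing requirement for an inter-satellite laser link.
# All parameters are illustrative and not tied to any announced constellation.

WAVELENGTH = 1550e-9   # m, telecom-band laser
APERTURE = 0.10        # m, transmit telescope diameter
LINK_RANGE = 100e3     # m, distance to the neighboring satellite

divergence = WAVELENGTH / APERTURE      # rad, rough diffraction-limited beam spread
spot_size = divergence * LINK_RANGE     # m, beam footprint at the receiver

print(f"beam spread: {divergence * 1e6:.0f} microradians")           # ~16 microradians
print(f"spot size at {LINK_RANGE / 1e3:.0f} km: {spot_size:.1f} m")  # ~1.6 m
```

Keeping a meter-scale spot centered on a receiver 100 kilometers away means controlling pointing to a handful of microradians, roughly a coin's width seen from a kilometer away, while both spacecraft move at orbital velocity.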
Radiation threatens the electronics themselves. Google’s proton beam tests are encouraging — their TPUs survived three times the expected orbital dose. But that’s a controlled lab test. Maintaining reliable performance for years through solar storms, cosmic ray events, and cumulative radiation damage is a completely different challenge. A space-based data center cannot afford the random bit-flip errors that radiation causes in sensitive AI computations.
And then there’s economics. The entire business case hinges on dramatically reduced launch costs. SpaceX’s Starship is the linchpin. If Starship achieves its promised cost per kilogram — potentially under $100/kg — the economics flip and orbital computing becomes competitive. If it doesn’t, or if there are significant delays, space data centers remain a science experiment rather than a viable business.
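A simple sensitivity check shows how hard the business case leans on that one number. The price points below are illustrative stand-ins: roughly $1,500/kg for today's costs (seven to eight times Google's $200/kg threshold) and the sub-$100/kg figure Starship is targeting, applied to a hypothetical 100-tonne orbital compute module.

```python
# Launch-cost sensitivity for one hypothetical 100-tonne orbital compute module.
# The $/kg figures are illustrative price points, not quotes from any provider.

MODULE_MASS_KG = 100_000

price_points = [
    ("today, roughly", 1500),       # ~7-8x Google's threshold, per the estimates above
    ("Google's threshold", 200),
    ("Starship target", 100),
]

for label, usd_per_kg in price_points:
    cost_musd = MODULE_MASS_KG * usd_per_kg / 1e6
    print(f"{label:20s} ${usd_per_kg:>5}/kg -> ${cost_musd:6.0f}M to orbit")
```

The same module swings from about $150 million to about $10 million in launch costs depending purely on the price per kilogram, which is the difference between a science experiment and a business.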
The Profound Implications
The strategic implications extend far beyond computing efficiency. Whoever controls orbital compute infrastructure could gain enormous advantages — not just in AI development, but in Earth observation, communications, and even military intelligence. The U.S.-China space race used to be about prestige and moon landings. Now it’s about server racks and solar panels.
For the environment, the calculus is complex. Terrestrial data centers consume vast amounts of water and strain electrical grids that are often powered by fossil fuels. Orbital data centers eliminate water use and run on pure solar energy, but frequent rocket launches carry an environmental footprint of their own. The net calculation depends heavily on launch frequency and rocket reusability.
The trajectory is clear. Prototype missions are already flying — Starcloud has trained an AI model in orbit. Google’s Suncatcher satellites target 2027. Both superpowers are committing serious resources. Industrial-scale orbital data centers replacing terrestrial ones is probably a decade-plus timeline, but the first steps are being taken right now.
The next time someone tells you their data is “in the cloud,” it might be worth asking: which orbit?
Low Earth, most likely. About 550 kilometers up, completing one lap roughly every 95 minutes.
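That lap time falls straight out of Kepler's third law; here is a short check assuming a circular orbit at 550 kilometers.

```python
import math

# Kepler's third law check on the lap time for a 550 km circular orbit.
MU_EARTH = 3.986e14    # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6371e3       # m, mean Earth radius
ALTITUDE = 550e3       # m, typical Starlink-class altitude

a = R_EARTH + ALTITUDE                                      # circular orbit radius
period_min = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60
print(f"orbital period: {period_min:.1f} minutes")          # ~95 minutes
```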