Why Google and SpaceX Are Taking AI Infrastructure to Space
As the AI boom accelerates, the biggest bottleneck is unlikely to be the availability of chips or components. It will be electricity, cooling, and physical space. On Earth, data centers are running headlong into grid congestion, power shortages, water constraints, and permitting delays that can stretch for years. In response, a radical idea is moving from theory to serious experimentation: putting data centers in space. Far from science fiction, orbital data centers are an attempt to escape the hard physical limits now throttling AI infrastructure, by moving compute to an environment where power is abundant, cooling is passive, and scale is unconstrained.

Several companies are betting on this space, including SpaceX, Alphabet’s (NASDAQ:GOOG) Google, Axiom Space, OrbitsEdge, and Nvidia (NASDAQ:NVDA)-backed Starcloud. The appeal becomes clearer once you examine the economics and physics that increasingly constrain AI infrastructure on Earth.
Why Is There Interest In Sending Data Centers Skyward?
- Scale and power without limits: With no land constraints or dependence on crowded power grids, fleets of satellites can run on near-constant solar energy, supporting extremely large compute systems without the risk of grid outages. A rough sizing sketch follows this list.
- Higher server density: Radiating waste heat directly to the cold of space removes the water and chiller constraints of terrestrial facilities, allowing AI chips to be packed more tightly and run more efficiently than in traditional Earth-based data centers.
- Built-for-space networking and processing: Laser links between satellites enable fast communication across large clusters, while processing data in orbit avoids slow and costly data downloads to Earth.
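To put the "near-constant solar energy" point in rough perspective, here is a back-of-the-envelope sizing sketch. The 1 MW power target and 30% cell efficiency are illustrative assumptions, not figures from any of the companies mentioned; the solar constant of roughly 1,361 W/m² is simply the sunlight available above the atmosphere.

```python
# Rough solar-array sizing for a hypothetical orbital compute platform.
# Power target and cell efficiency are illustrative assumptions.
SOLAR_CONSTANT = 1_361      # W/m^2 of sunlight above the atmosphere
CELL_EFFICIENCY = 0.30      # assumed high-end space solar cells
TARGET_POWER_W = 1_000_000  # hypothetical 1 MW of continuous compute power

array_m2 = TARGET_POWER_W / (SOLAR_CONSTANT * CELL_EFFICIENCY)
print(f"~{array_m2:,.0f} m^2 of panel (~{array_m2**0.5:.0f} m on a side)")
# ~2,449 m^2 of panel (~49 m on a side)
```

Under these assumptions, a megawatt-class platform needs a solar array roughly 50 meters on a side, which is large but well within the scale of structures already deployed in orbit.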
Structural Limits
- Latency to Earth: The time it takes signals to travel to and from orbit makes orbital compute unsuitable for real-time apps, trading systems, or other delay-sensitive uses; the quick calculation after this list shows the physical floor.
- Radiation damage: Continuous exposure to space radiation causes errors and gradually damages chips, shortening their useful life.
- No repairs or upgrades: Hardware failures cannot be fixed in orbit, and systems cannot be upgraded, so the technology falls behind rapidly compared with constantly refreshed ground data centers.
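The latency point comes straight from the speed of light. The sketch below computes the minimum round-trip propagation delay for a satellite directly overhead at two illustrative altitudes; real-world latency is higher once slant range, routing, and queuing are added.

```python
# Minimum ground-to-satellite round-trip time, limited only by the speed of light.
# Altitudes are illustrative: ~550 km for a Starlink-like LEO shell,
# 35,786 km for geostationary orbit.
C = 299_792_458  # speed of light in vacuum, m/s

def round_trip_ms(altitude_km: float) -> float:
    """Round-trip light travel time for a satellite directly overhead, in ms."""
    return 2 * altitude_km * 1_000 / C * 1_000

for label, alt_km in [("LEO (~550 km)", 550), ("GEO (35,786 km)", 35_786)]:
    print(f"{label}: >= {round_trip_ms(alt_km):.1f} ms round trip")
# LEO (~550 km): >= 3.7 ms round trip
# GEO (35,786 km): >= 238.7 ms round trip
```

A few milliseconds from low orbit is tolerable for many workloads, but it is an unavoidable floor that no amount of engineering removes, which is why delay-sensitive applications are likely to stay on the ground.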
Google and SpaceX
The orbital data center thesis depends on two distinct but complementary capabilities: proving that compute hardware can survive and work in space, and making deployment economically viable. Google and SpaceX sit at the center of the thesis because each controls one of those two levers.
With Project Suncatcher, targeted for early prototype launches around 2027, Google aims to test how standard TPU accelerators behave in orbit. The focus is not peak performance or commercialization, but survivability. Key things Google intends to validate include radiation tolerance, thermal stability, fault management, and optical networking in a space environment. Google is partnering with Planet Labs to develop and deploy two prototype satellites.
Google’s advantage lies in its end-to-end control of the AI stack. It designs its own accelerators, operates distributed systems that already assume constant hardware failure, and has deep experience extracting reliability from imperfect components at massive scale.
SpaceX addresses the other side of the problem: economics.
Historically, launch costs made large-scale orbital computing impractical. SpaceX’s fully reusable Starship is designed to collapse those costs, enabling the deployment of hundreds or thousands of compute-capable satellites rather than a handful of experiments. SpaceX also operates Starlink, a global, high-bandwidth satellite network that orbital data centers could integrate with directly instead of building new communications infrastructure from scratch.
Combined with its unmatched launch volume, manufacturing scale, and experience operating large constellations, this makes SpaceX the likely economic gatekeeper for whether orbital compute moves toward commercialization.
Key Proof Points To Watch
For investors eyeing orbital data centers, this is a classic high-risk, high-reward moonshot with a potentially decade-plus horizon. Near-term progress will be incremental, but the following developments are worth watching in the coming years.
- Google’s Project Suncatcher: A key milestone will be whether Google can successfully launch and operate its first space-based AI compute prototypes with Planet Labs, proving that commercial AI chips can survive and perform reliably in orbit.
- AI compute inside Starlink: Another signal to watch is SpaceX integrating more onboard AI processing into future Starlink satellites, showing that meaningful computing can be done directly in space rather than only on the ground.
- Falling launch costs: Today, launching payloads to orbit still costs thousands of dollars per kilogram; SpaceX’s Falcon 9 typically runs around $1,600 to $2,000 per kg. SpaceX believes that fully reusable Starship rockets could push those costs below $200 per kg by the 2030s, which would be a major unlock for large-scale orbital data centers. The arithmetic below shows why that threshold matters.
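To see what the per-kilogram figures imply, the rough arithmetic below compares launch costs at today's Falcon 9 rate and at Starship's targeted rate. The 50-tonne payload is a hypothetical mass for a single orbital compute cluster, chosen only for illustration, not a published specification.

```python
# Illustrative launch-cost arithmetic using the per-kg figures cited above.
PAYLOAD_KG = 50_000  # hypothetical mass of one orbital compute cluster

scenarios = {
    "Falcon 9 today (~$1,800/kg)": 1_800,
    "Starship target (<$200/kg)": 200,
}
for name, usd_per_kg in scenarios.items():
    total_usd = PAYLOAD_KG * usd_per_kg
    print(f"{name}: ${total_usd / 1e6:,.0f}M to launch {PAYLOAD_KG / 1_000:.0f} t")
# Falcon 9 today (~$1,800/kg): $90M to launch 50 t
# Starship target (<$200/kg): $10M to launch 50 t
```

Under these assumptions, the same payload drops from roughly $90 million to roughly $10 million to launch, and it is that order-of-magnitude reduction, repeated across hundreds of launches, that would make large-scale orbital data centers economically plausible.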