Google & SpaceX: The Space Data Centre Deal That Changes Everything

Marcus Webb
May 15, 2026
11 min read
Business & Money

Quick Summary

Google is in talks with SpaceX to launch orbital data centres. Here's what the numbers say, what the real obstacles are, and why this matters for investors.

Google and SpaceX Are Building the Future — But the Numbers Tell a More Complicated Story

Google is reportedly in talks with SpaceX to secure rocket launch capacity for orbital data centres. On the surface, this sounds like two titans shaking hands on a sci-fi fantasy. Look closer at the numbers, and it becomes one of the most strategically rational moves in the history of enterprise infrastructure — with some very real technical and financial hurdles still standing in the way.

Here is what ambitious investors and technology professionals need to understand: this is not a moonshot press release. This is a calculated, multi-year infrastructure play backed by a 6.1% equity stake, a 2027 prototype programme, and a proven chip architecture that has already survived radiation tests in orbit. The question is not whether space-based compute is coming. The question is how fast the economics get there — and who profits when they do.

Why Google Is Uniquely Positioned for Orbital Computing

Google is not approaching SpaceX as a customer buying a seat on a rocket. It is approaching as an early-stage investor calling in a strategic favour — and that distinction matters enormously.

Google's 6.1% ownership stake in SpaceX dates back to an early funding round. That history creates leverage that no amount of cash can buy in a competitive launch market. When SpaceX is oversubscribed with payload demand — and it increasingly will be — Google is not waiting in the same queue as everyone else.

Beyond the relationship, Google brings three structural advantages to orbital computing that no competitor currently replicates:

  • Tensor Processing Units (TPUs) with proven space hardening. Google's TPUs have already survived 15 kilorads of radiation exposure in orbit — roughly three times the expected five-year cumulative dose. No hard failures. That is not a lab result. That is field data.
  • Over a decade of custom silicon development. Google has been building TPUs since 2015. Most AI labs trying to develop proprietary chips are five to eight years behind. That head start is an enormous moat.
  • Project Suncatcher. Google's 2027 prototype initiative is specifically designed to evaluate whether TPUs can operate at commercial scale in orbit. The programme is already partnered with Planet Labs, which is providing two prototype satellites to run initial feasibility tests.

Compare this to Meta, which has similar free cash flow and comparable debt levels but zero SpaceX exposure, no space data centre roadmap, and no proprietary space-hardened chip programme. The divergence in strategic positioning is significant.

The Core Economics: Why $200 Per Kilogram Is the Magic Number

Everything in the space data centre thesis comes down to launch cost. Currently, SpaceX charges between $250 and $600 per kilogram to orbit. At those rates, commercial orbital computing is not economically viable at scale.

The target that changes the equation is $200 per kilogram — and ideally closer to $50. Here is why that threshold matters:

  • SpaceX has stated ambitions to move one million tonnes of payload to orbit annually
  • At $50 per kilogram, that payload volume represents approximately $50 billion in annual launch revenue
  • That same volume could theoretically deliver around 100 gigawatts of compute capacity per year to orbit
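
To keep the arithmetic honest, here is a minimal back-of-envelope check of those bullets in Python. All inputs are the article's own figures; the kilograms-per-kilowatt line is derived from them, not sourced.

```python
# Back-of-envelope check on the launch-economics bullets above.
# Inputs are the article's figures; kg-per-kW is derived, not sourced.

payload_tonnes_per_year = 1_000_000            # SpaceX's stated annual ambition
payload_kg_per_year = payload_tonnes_per_year * 1_000

price_per_kg = 50                              # optimistic customer price, $/kg
annual_launch_revenue = payload_kg_per_year * price_per_kg
print(f"Annual launch revenue: ${annual_launch_revenue / 1e9:.0f}B")
# -> Annual launch revenue: $50B

compute_delivered_gw = 100                     # article's capacity figure
kg_per_kw = payload_kg_per_year / (compute_delivered_gw * 1e6)
print(f"Implied mass budget: {kg_per_kw:.0f} kg per kW of orbital compute")
# -> Implied mass budget: 10 kg per kW of orbital compute
```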

Those numbers are aggressive. Even Starship — SpaceX's fully reusable heavy-lift vehicle — is not yet operating on a reliable, cost-optimised cadence. Until it does, the cost-per-kilogram figure stays too high for data centre economics to work.

For context: SpaceX's internal economics need to get to approximately $15 to $30 per kilogram in actual launch cost to support a commercially competitive $50 to $60 per kilogram customer price with healthy margins. That requires years of additional Starship development and iterative cost reduction. It is achievable. It is not imminent.
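
The margin claim is also easy to verify. A minimal sketch using only the cost and price ranges just quoted; no SpaceX internal figures are assumed beyond them:

```python
# Implied launch gross margin across the article's cost and price ranges.

internal_cost_per_kg = (15, 30)     # estimated actual launch cost, $/kg
customer_price_per_kg = (50, 60)    # commercially competitive price, $/kg

best = 1 - internal_cost_per_kg[0] / customer_price_per_kg[1]   # low cost, high price
worst = 1 - internal_cost_per_kg[1] / customer_price_per_kg[0]  # high cost, low price
print(f"Implied gross margin: {worst:.0%} to {best:.0%}")
# -> Implied gross margin: 40% to 75%
```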

Key takeaway: The space data centre opportunity is real, but it is a 2030s story dressed up in 2025 announcements. Investors should price it as a long-duration call option, not a near-term revenue catalyst.

The Two Technical Problems Nobody Is Talking About Enough

Radiation Hardening at Scale

Google's TPU surviving 15 kilorads in a small-scale test is genuinely impressive. Scaling that result to a full commercial data centre payload is an entirely different engineering problem. The chips most vulnerable to cosmic radiation — high-bandwidth memory modules from manufacturers like Samsung and SK Hynix — are the same chips currently used in high-performance AI inference and training workloads.

Bit-flip errors, where a high-energy cosmic particle flips a stored bit (a 1 to a 0, or vice versa) or triggers a transient electrical anomaly, are manageable at small scale. At gigawatt-scale compute, error-correction and hardware-redundancy requirements become a serious engineering and cost burden. This is an unsolved problem at commercial scale, and it is one reason the industry's excitement is currently running ahead of its capabilities.
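
The standard mitigations are error-correcting codes and hardware redundancy, and redundancy is where the cost burden comes from. As an illustration only (a toy sketch, not anything from Google's flight designs), here is the classic triple modular redundancy pattern in Python: you pay three times the hardware to survive any single-copy bit flip.

```python
# Toy triple modular redundancy (TMR): keep three copies of each word and
# majority-vote on read. Any single-copy bit flip is corrected, at 3x the
# hardware cost -- which is why redundancy budgets balloon at scale.

def tmr_write(word: int) -> list[int]:
    return [word, word, word]              # three independent copies

def tmr_read(copies: list[int]) -> int:
    a, b, c = copies
    return (a & b) | (a & c) | (b & c)     # bitwise majority vote

copies = tmr_write(0b1011)
copies[1] ^= 0b0010                        # a cosmic ray flips one bit in copy 1
assert tmr_read(copies) == 0b1011          # the vote recovers the original word
```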

Heat Rejection: The Problem Space Cannot Solve Alone

This is the most underappreciated obstacle in the entire space data centre conversation. Many people assume that because space is cold — ambient temperatures can reach as low as -269 degrees Celsius — cooling is not a problem. It is precisely the opposite.

Space is a vacuum. There is no convective airflow to carry heat away from chips. The only mechanism for heat rejection in orbit is radiative cooling — essentially large surface-area panels that radiate thermal energy as infrared radiation. The numbers are sobering:

  • For every 1 megawatt of compute, you need approximately 4 tennis courts of radiator surface area
  • For 1 gigawatt of compute, that scales to roughly 4,000 tennis courts of radiator panels
  • Chemical cooling systems using ammonia exist but introduce significant mechanical failure points and toxicity risks
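
Those tennis-court figures pass a basic physics check. A minimal sketch using the Stefan-Boltzmann law, assuming a two-sided panel at roughly 300 K with emissivity 0.9 and deep space as the heat sink (all three assumptions are mine, not the article's):

```python
# Sanity check on radiator area via the Stefan-Boltzmann law.
# Assumptions (mine, not the article's): two-sided flat panel, ~300 K
# surface temperature, emissivity 0.9, radiating to deep space.

SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9
PANEL_TEMP_K = 300.0
SIDES = 2                   # a flat panel radiates from both faces
TENNIS_COURT_M2 = 260.9     # 23.77 m x 10.97 m doubles court

flux_w_per_m2 = SIDES * EMISSIVITY * SIGMA * PANEL_TEMP_K ** 4
area_per_mw = 1e6 / flux_w_per_m2
print(f"{area_per_mw:.0f} m^2 per MW, "
      f"about {area_per_mw / TENNIS_COURT_M2:.1f} tennis courts")
# -> 1210 m^2 per MW, about 4.6 tennis courts -- in line with the
#    article's roughly-four-courts figure
```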

Until there is a compact, reliable, high-capacity heat rejection solution, space data centres will remain constrained to small-scale prototype operations. This is not a funding problem. It is a physics and engineering problem.

SpaceX's Smart Move: Subsidise Space With AI Revenue on the Ground

SpaceX is not waiting for orbital data centres to become economically viable before monetising its compute infrastructure. The deal with Anthropic — supplying 300 megawatts of new computing capacity across more than 220,000 Nvidia GPUs by the end of May 2025 — is a textbook example of using near-term cash flow to fund long-term infrastructure investment.

The timing is sharp. Anthropic's Claude models are currently outperforming competitors in enterprise adoption. Usage has quadrupled at several major organisations. Claude Code adoption is accelerating sharply while competitors like Cursor are showing declining engagement curves. Microsoft is actively working to reduce its OpenAI dependency and route enterprise customers toward Anthropic and Amazon Bedrock.

For SpaceX, partnering with the AI provider that is demonstrably winning enterprise contracts right now — rather than maintaining loyalty to a specific ideological alignment — is the rational capital allocation decision. Terrestrial AI compute generates revenue today. That revenue subsidises Starship development. Starship development reduces launch costs. Lower launch costs make space data centres viable. The flywheel is logical, even if it operates on a longer timeline than most headlines suggest.

Meanwhile, AI energy consumption on terrestrial infrastructure is projected to grow from approximately 8% of US commercial electricity consumption in 2024 to around 20% by 2050. That one-in-five kilowatt-hours figure represents an extraordinary demand signal — and it strengthens the long-term case for moving some portion of compute workloads off the grid entirely.

What This Means for Google's Valuation

At current prices around $374, Google is trading at a KPEG ratio of approximately 1.86 using a four-year forward growth estimate. Run the numbers: that implies a fair-value price target of roughly $574, or about 53% upside from current levels.

The $400 level is a meaningful technical resistance point. A confirmed break above it, supported by continued momentum in Gemini adoption, TPU licensing revenue, and SpaceX-related catalysts, could trigger a rapid repricing toward that $574 target.

A few variables to watch closely:

  • Gemini pricing pressure. Google is transitioning from $100–$200 per seat enterprise AI plans to a $24 per seat model under the new AI Expanded Access tier. The bet is on volume over margin; a quick breakeven sketch follows this list. If adoption accelerates, the revenue impact is positive. If uptake is sluggish, it signals deeper competitive pressure from Anthropic and others.
  • SpaceX IPO timing. Google's 6.1% stake is a dormant asset until SpaceX lists publicly. When it does, that equity position will be marked to market on Google's balance sheet, potentially creating a significant one-time value unlock.
  • TPU commercialisation. If Project Suncatcher produces positive 2027 prototype results, it validates an entirely new revenue stream for Google's hardware division that the market is not currently pricing in.
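
On the Gemini pricing bullet, the volume-over-margin bet can be quantified directly. A quick sketch using only the seat prices quoted above:

```python
# Seat volume needed at the $24 tier to hold revenue flat versus the
# legacy $100-$200 per-seat plans. Prices are the article's figures.

legacy_low, legacy_high = 100, 200   # $/seat, old enterprise AI plans
new_price = 24                       # $/seat, AI Expanded Access tier

print(f"Seats must grow {legacy_low / new_price:.1f}x to "
      f"{legacy_high / new_price:.1f}x just to hold revenue flat")
# -> Seats must grow 4.2x to 8.3x just to hold revenue flat
```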

The Practical Investor Takeaway

Space-based data centres are not a 2025 trade. They are a structural infrastructure theme for the 2030s. Here is how to think about positioning now:

  • Google (GOOGL) offers the most direct public-market exposure to this theme — SpaceX equity upside, proprietary TPUs, and a valuation that still implies meaningful discount to fair value.
  • Planet Labs is the near-term operational partner on Google's orbital compute prototypes. Small cap, high risk, but operationally relevant.
  • Rocket Lab provides an alternative launch option if SpaceX costs remain too high and is worth monitoring as a complementary infrastructure play.
  • SpaceX itself remains private. The IPO, when it comes, will likely be the highest-demand technology listing in years. Positioning for that event through existing public holdings — Google above all — is the most accessible strategy today.

The fundamentals are aligning. The timeline is longer than the hype suggests. And the investors who understand both of those things simultaneously are the ones who will be positioned correctly when the economics finally click into place.

Frequently Asked Questions

What is Google's ownership stake in SpaceX, and why does it matter? Google owns approximately 6.1% of SpaceX, making it one of the company's earlier institutional investors. This stake gives Google preferential access and negotiating leverage for launch capacity — a significant structural advantage as demand for rocket payload slots increases. It also means Google holds a large block of pre-IPO equity that will be marked to market when SpaceX eventually lists publicly.

What is Project Suncatcher and when will we know if it works? Project Suncatcher is Google's internal 2027 prototype initiative to test whether its Tensor Processing Units can operate at meaningful scale in orbital environments. Google has partnered with Planet Labs, which is providing two prototype satellites for initial feasibility testing. Results from these early tests will determine whether the programme advances toward commercial-scale deployment.

Why is $200 per kilogram such a critical threshold for space data centres? Below $200 per kilogram in launch cost, the economics of delivering and operating compute hardware in orbit begin to approach commercial viability compared to terrestrial alternatives. SpaceX currently charges $250 to $600 per kilogram depending on the mission profile. The company's long-term target through fully reusable Starship operations is significantly lower — potentially $50 per kilogram at customer pricing — but that requires years of additional development.

How does the Anthropic deal fit into SpaceX's broader space infrastructure strategy? SpaceX's deal to supply Anthropic with 300 megawatts of compute capacity across more than 220,000 Nvidia GPUs generates near-term revenue that can be reinvested into Starship development and launch cost reduction. It is a deliberate strategy to use ground-based AI infrastructure revenue to subsidise the orbital computing roadmap — a logical capital allocation move given that space data centres will not be economically viable at scale for at least several more years.

What are the biggest unsolved technical problems for orbital data centres? Two problems stand above the rest. First, heat rejection: space is a vacuum, so there is no convective cooling. Every megawatt of compute requires approximately four tennis courts of radiative panel surface area to dissipate heat — a significant engineering constraint at gigawatt scale. Second, radiation hardening at scale: while Google's TPUs have survived small-scale radiation testing, ensuring reliable operation of large compute clusters against cosmic radiation and solar flare events remains an unsolved challenge at commercial density.
