
New narrative: Space computing power

The Wall Street Journal reported on December 5 that SpaceX is about to launch a new round of stock issuance, with its valuation expected to soar to a staggering $800 billion. This means the company's valuation has doubled in just five months.
In response to this market rumor, Musk's reply was strategically ambiguous. He denied that the company was raising funds at an $800 billion valuation, but emphasized SpaceX's sustained positive cash flow and its twice-yearly stock buyback program, sending the market a clear signal of financial health.
When discussing the core drivers of valuation, Musk explicitly linked it to the progress of SpaceX's two pillar projects—Starship and Starlink. He particularly noted that securing global radio spectrum rights for satellite-to-device (D2D) communication would be the key to unlocking a trillion-dollar potential market.
This valuation expectation has rippled through capital markets. If realized, SpaceX would not only surpass OpenAI as the world's most valuable "unicorn"; its scale would rival that of a sovereign technology fund. Measured against S&P 500 constituents, SpaceX would rank 13th, with a market cap exceeding the combined value of the six largest U.S. defense contractors, including Lockheed Martin and Raytheon, underscoring how investors now treat commercial space as a national-level strategic industry.
More importantly, this valuation narrative clearly outlines a grand vision that goes beyond traditional satellite internet:
Orbital computing, or "space-based computing power."
01 Musk's Latest Ambition
Musk revealed that SpaceX is planning to enter the orbital data center sector. The logic addresses a growing bottleneck on Earth: finding cheap, sustainable, and massive power resources to run AI models is becoming increasingly difficult. Thus, space has become the new promised land.
In Musk's vision, deploying massive AI computing units directly in space could become "the fastest and most feasible way to expand computing power" in the next three to four years.
He provided a staggering quantitative projection: if SpaceX can launch on the order of a megaton (one million tonnes) of payload into low Earth orbit annually, with each satellite carrying about 100 kW of dedicated AI computing power, the annual increase in computing capacity would reach 100 gigawatts (GW), roughly the combined power draw of a thousand of today's hyperscale data centers.
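The projection can be sanity-checked with a back-of-envelope calculation. The per-satellite mass (about one tonne) and the reading of "megaton-level" as roughly one million tonnes per year are illustrative assumptions, not figures SpaceX has confirmed:

```python
# Back-of-envelope check of the ~100 GW/year projection.
# Assumptions (illustrative, not from SpaceX): each computing satellite
# masses ~1 tonne and carries 100 kW of compute; "megaton-level" annual
# launch mass is read as ~1e6 tonnes to low Earth orbit per year.
annual_launch_mass_t = 1_000_000      # tonnes to LEO per year
sat_mass_t = 1.0                      # tonnes per satellite (assumption)
sat_power_kw = 100.0                  # kW of AI compute per satellite

sats_per_year = annual_launch_mass_t / sat_mass_t
added_capacity_gw = sats_per_year * sat_power_kw / 1e6  # kW -> GW

# For scale: a large hyperscale data center draws roughly 100 MW,
# so 100 GW corresponds to on the order of a thousand such facilities.
hyperscale_mw = 100.0
equivalent_dcs = added_capacity_gw * 1000 / hyperscale_mw

print(f"satellites/year:   {sats_per_year:,.0f}")
print(f"added capacity:    {added_capacity_gw:.0f} GW/year")
print(f"hyperscale DC eq.: {equivalent_dcs:.0f}")
```

Under these assumptions the numbers close: a million one-tonne satellites per year at 100 kW each yields exactly the 100 GW figure Musk cited.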
Although this vision omits many engineering details, its theoretical advantages are highly attractive: Orbital data centers require almost no human maintenance; energy comes from inexhaustible and stable solar power in space; and heat dissipation can be efficiently solved via passive thermal radiation in the near-absolute-zero cosmic background, saving about 40% of the cooling energy consumed by terrestrial data centers.
Moreover, these computing satellites could form an intelligent network via inter-satellite laser links, creating a distributed, dynamically schedulable "orbital AI cloud" that seamlessly integrates with the existing Starlink communication network, building a space-Earth integrated computing-communication infrastructure.
02 The Ultimate Computing Utopia?
Space provides a physical environment for large-scale computing that is difficult to replicate on Earth. Its background temperature is approximately -270°C, close to absolute zero, allowing electronic waste heat to be efficiently radiated into deep space.
In contrast, terrestrial data centers rely on massive air conditioning, chiller, and fan systems for cooling, which typically account for 30% to 40% of total energy consumption. Passive radiative cooling in space requires almost no additional energy. Analyses (e.g., StarCloud forecasts) suggest that the comprehensive energy cost of space-based data centers could drop to one-tenth of terrestrial levels.
Of course, space-based heat dissipation isn't without costs. To efficiently radiate heat from computing chips like GPUs, large-area radiative heat sinks are required. An exascale orbital data center might need several square kilometers of deployed heat dissipation area. This poses epic challenges for satellite structural design, materials engineering, and orbital deployment.
Even considering these factors, space's inherent advantages in heat dissipation remain significant.
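The "several square kilometers" figure can be checked with the Stefan–Boltzmann law. The emissivity (0.9), radiator temperature (300 K), and one-sided radiation with negligible absorbed sunlight are all illustrative assumptions:

```python
# Stefan–Boltzmann estimate of the radiator area needed in orbit.
# Assumptions (illustrative): emissivity 0.9, radiator surface at 300 K,
# one-sided radiation, absorbed sunlight/albedo and the ~4 K cosmic
# background both neglected.
SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_w, temp_k=300.0, emissivity=0.9):
    """Area needed to passively radiate `heat_w` watts to deep space."""
    flux = emissivity * SIGMA * temp_k**4  # W radiated per m^2
    return heat_w / flux

area_1gw_km2 = radiator_area_m2(1e9) / 1e6  # 1 GW of waste heat, in km^2
print(f"radiated flux at 300 K: {0.9 * SIGMA * 300**4:.0f} W/m^2")
print(f"area for 1 GW:          {area_1gw_km2:.1f} km^2")
```

At roughly 400 W/m² of radiated flux, a gigawatt of waste heat needs on the order of 2–3 km² of radiator, consistent with the article's "several square kilometers" for an exascale facility.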
The greater energy dividend comes from the sun. In low Earth orbit, solar energy density remains stable at about 1361 W/m², unaffected by atmospheric attenuation, day-night cycles (near-permanent sunlight is achievable via orbital design), or weather. By comparison, even in Earth's most sun-rich desert regions, the annual average effective solar flux is only about one-fifth of that in space.
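The "one-fifth" comparison checks out against typical irradiance figures. The desert number below (about 2500 kWh/m² per year of global horizontal irradiance for a very sunny site) is an assumed reference value, not from the article:

```python
# Orbital vs. terrestrial average solar flux.
SOLAR_CONSTANT = 1361.0  # W/m^2 above the atmosphere (AM0)

# Assumed reference: a very sunny desert receives roughly
# 2500 kWh/m^2 per year of global horizontal irradiance.
desert_kwh_per_m2_year = 2500.0
hours_per_year = 8760.0
desert_avg_w_per_m2 = desert_kwh_per_m2_year * 1000.0 / hours_per_year

ratio = SOLAR_CONSTANT / desert_avg_w_per_m2
print(f"desert annual average: {desert_avg_w_per_m2:.0f} W/m^2")
print(f"orbit / desert ratio:  {ratio:.1f}x")
```

The ratio comes out near 5, matching the claim that even the sunniest deserts average about one-fifth of the flux available in orbit.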
From an application paradigm perspective, deploying computing power in Earth-orbiting satellite constellations essentially creates a globally covered, low-latency "space-based edge computing platform."
Since satellites are constantly moving, users anywhere on Earth (including traditional network dead zones like oceans and polar regions) can quickly access nearby computing nodes. This means data no longer needs to travel thousands of kilometers via terrestrial fiber optics, potentially reducing end-to-end latency by an order of magnitude, not only eliminating "signal dead zones" but also opening new possibilities for latency-sensitive applications like autonomous driving, remote surgery, immersive metaverse, and high-frequency trading.
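The order-of-magnitude latency claim can be illustrated with a simple propagation-delay comparison. The 550 km altitude (a typical Starlink shell) and the 10,000 km fiber path are illustrative assumptions, and processing and queuing delays are ignored:

```python
# Rough one-way propagation latency: LEO hop vs. long-haul fiber.
# Assumptions (illustrative): satellite altitude 550 km, terrestrial
# fiber path 10,000 km, light travels at ~2/3 c in glass fiber;
# processing, queuing, and routing delays are ignored.
C = 299_792.458  # speed of light in vacuum, km/s

alt_km = 550.0
leo_ms = 2 * alt_km / C * 1000          # up to the satellite and back down

fiber_km = 10_000.0
fiber_ms = fiber_km / (2 / 3 * C) * 1000

print(f"LEO up-and-down:   {leo_ms:.1f} ms")
print(f"10,000 km fiber:   {fiber_ms:.1f} ms")
```

A nearby orbital node is reachable in a few milliseconds, versus roughly 50 ms for an intercontinental fiber path, which is where the order-of-magnitude figure comes from.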
Currently, SpaceX, with its reusable rocket technology, dominates global satellite launch capacity, accounting for about 90% of launched mass. As competitors such as Blue Origin (New Glenn) and Rocket Lab (Electron/Neutron) mature their launch capabilities and China's commercial space sector grows rapidly, the global launch market is entering a new growth cycle. Economies of scale are expected to further reduce per-kilogram launch costs, clearing economic barriers for large-scale deployment of computing satellite clusters.
03 Thorny Path to a Glorious Vision
However, transitioning from blueprint to reality, space-based computing faces multiple severe challenges, both technical and regulatory.
Technical feasibility is the first hurdle:
Radiation hardening: Space is filled with high-energy cosmic rays and charged particles that can cause bit flips, latch-ups, or permanent damage to unprotected integrated circuits. While radiation-hardened (Rad-Hard) chips can address reliability, their manufacturing processes often lag consumer-grade chips by generations, with high performance and cost penalties. Balancing commercial high-performance computing hardware (e.g., NVIDIA H100) with necessary radiation protection is a core engineering challenge.
On-orbit maintenance and reliability: Once a satellite fails, manual repair is nearly impossible. This demands extremely high system reliability or modular designs for replacement. Post-deployment, managing end-of-life deorbiting to avoid space debris is another major challenge.
Energy and thermal management scale: As mentioned, gigawatt-scale computing implies gigawatt-level power consumption and waste heat. Designing lightweight, high-efficiency, ultra-large deployable solar arrays and radiators involves complex systems engineering across materials science, structural mechanics, and orbital dynamics.
Network interconnectivity and latency: While inter-satellite laser links offer high bandwidth, dynamic networking, routing optimization, and stable satellite-ground communication—especially under harsh space weather—require extensive validation.
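One standard software-level mitigation for the radiation-induced bit flips mentioned above, complementary to hardware hardening, is triple modular redundancy (TMR): run a computation three times and take a majority vote. This is general industry practice, not a technique the article attributes to SpaceX; a minimal sketch:

```python
from collections import Counter

def tmr(compute, *args):
    """Run `compute` three times and return the majority result.

    Masks a transient single-event upset that corrupts one of the
    three runs; it cannot mask identical errors in two runs, and
    results must be hashable so they can be vote-counted.
    """
    results = [compute(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: all three results disagree")
    return value

# Example with a trivial computation.
print(tmr(lambda x: x * x, 12))  # → 144
```

Real radiation-tolerant systems combine schemes like this with ECC memory, watchdog resets, and checkpointing; the trade-off is paying roughly 3x the compute for each protected result.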
Regulation and governance present another deep challenge:
Spectrum allocation: Satellite-to-device (D2D) and inter-satellite communications require scarce radio spectrum. The ITU's coordination process is lengthy, and national regulations vary. Compatibility and interference coordination with terrestrial 5G/6G networks will involve protracted negotiations.
Orbital and space safety: Low Earth orbit resources are limited. Adding tens of thousands of computing satellites drastically increases collision risks, demanding unprecedented space traffic management (STM). International rules on "deployment rights, collision avoidance, and liability" remain undefined.
Data sovereignty and security: Storing and processing data in a cross-border "space cloud" raises thorny international political and legal issues, including jurisdiction, data privacy (e.g., GDPR), and national security-related data flow regulations.
04 The Race Begins: An Emerging Ecosystem
Despite the challenges, capital and tech giants are already mobilizing. Institutions like Morgan Stanley have begun profiling key players in this emerging field, as an early "space computing" ecosystem takes shape.
Startup pioneers:
Starcloud is SpaceX's most direct potential competitor. In 2025, it secured over $20 million in seed funding and launched its "Starcloud-1" tech demo satellite in November. The satellite carries an NVIDIA H100 GPU and Google's lightweight open-source Gemma model, aiming to train NanoGPT in space. On December 11, the company announced the successful completion of its first in-orbit large language model training, marking a key proof-of-concept milestone.
Axiom Space, leveraging its commercial space station expertise, has included orbital data centers in its roadmap, targeting the launch of its first free-flying nodes by late 2025. Its advantage lies in potentially using future commercial stations as larger, maintainable computing module platforms.
Lonestar Data Holdings has chosen a more imaginative path: lunar data centers. After multiple storage tests on the ISS, it sent a small data storage payload to the Moon in February 2025 via Intuitive Machines' lander. Although the mission ended prematurely after landing, it demonstrated short-term operational feasibility in deep space, with the vision of using the Moon as Earth's "ultimate offshore backup center."
Tech giant initiatives:
Google officially unveiled its "Suncatcher" project in November 2025, planning to build space-based AI clusters using its custom Tensor Processing Units (TPUs). Its roadmap calls for two prototype satellites in early 2027, with the goal of achieving cost parity with terrestrial computing in the early 2030s as fully reusable rockets slash launch costs.
NVIDIA is deeply involved as a core computing supplier. Its high-performance GPUs (like the space-tested H100) are the hardware of choice. Through programs like "NVIDIA Inception," it's closely collaborating with Starcloud and others to define standard space computing architectures. Moreover, the adaptability of its CUDA ecosystem and full AI software stack to space environments will be foundational for the industry.
Overall, despite growing participation, space computing remains in an early exploratory phase. The sector's extreme technical, capital, and regulatory barriers make intense near-term competition unlikely.
The curtain has lifted on space computing, but this is far from a simple commercial race—it reflects a deeper civilizational logic: As Earth's physical limits become apparent, humanity is drafting infrastructure blueprints among the stars. SpaceX, Google, NVIDIA, and China's space clusters collectively point not just to a boundless computing network, but to another extension of human intelligence and will at cosmic scales.
This article is based on publicly available information and is for informational purposes only, not investment advice.
The copyright of this article belongs to the original author/organization.

