
Starlink Launch (Credit: SpaceX)
While we digest plans by ‘big tech’ (multi-national technology companies) to build tens of GW of AI datacentres on Earth, some of its leaders are floating the idea of putting a similar amount in orbit.
Proponents include Elon Musk, Jeff Bezos and Sundar Pichai. AI on satellites (AI-sat) could circumvent earthbound obstacles such as energy capacity and planning regulations while reducing latency, and it presages next decade’s launch of the AI-native 6G standard, which will combine cellular and non-terrestrial wireless networks.
Opponents, however, pose three questions: is AI-sat technologically feasible, economically defensible and socially desirable?
Technologically feasible?
The first question has a partial answer. The European Space Agency’s Phi-Sat 1 and 2 and the Satellogic/Palantir AI-First satellites undertake edge machine-learning (ML) analysis and filtering for weather and other computer vision applications today. Meanwhile, a rich ecosystem exists of radiation-hardened, space-qualified off-the-shelf components – for example, microprocessors, microcontrollers, ADCs and FPGAs.
However, orbital datacentres scale things enormously. Google published a white paper addressing the challenge when it announced Project Suncatcher in November 2025.
After encouraging lab tests, it aims to launch two prototype datacentre satellites in 2027 with partner Planet Labs to further validate some of the most important requirements.
First and foremost is the combination of power generation and thermal management: specifically, the size, capacity and integrity of the onboard solar panels and of the radiators needed to carry away the heat generated by AI chips. In space there is no air to do this; almost all waste heat must be radiated. Power density quickly turns into radiator area, mass and attitude-control complexity.
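To see why power density turns so directly into radiator area, a back-of-the-envelope Stefan-Boltzmann estimate is instructive. All the figures below (waste heat, temperatures, emissivity) are illustrative assumptions, not numbers from Google's white paper:

```python
# Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
# All figures below are illustrative assumptions, not from Google's paper.
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/m^2/K^4
emissivity = 0.9       # typical for radiator surface coatings
T_radiator = 330.0     # radiator surface temperature, K (~57 C)
T_sink = 255.0         # effective background sink temperature in LEO, K

waste_heat_w = 100e3   # assumed waste heat per satellite: 100 kW

# Net flux radiated per square metre of (single-sided) radiator:
flux = emissivity * SIGMA * (T_radiator**4 - T_sink**4)
area_m2 = waste_heat_w / flux
print(f"{flux:.0f} W/m^2 rejected -> {area_m2:.0f} m^2 of radiator")
```

Under these assumptions each square metre rejects on the order of a few hundred watts, so a 100kW satellite needs roughly 250m² of radiator: a structure far larger than the electronics it cools, which is exactly where the mass and attitude-control complexity comes from.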
Then there are the tensor processors (or GPUs) and the high-bandwidth memories on which they depend. AI SoCs fabricated at leading-edge process nodes have large dies, which present a bigger cross-section to radiation and are harder to protect against space damage.
Threats include total ionising dose (a radiation-based accumulation of charge across insulating layers that leads to degradation) and single-event effects (where single energetic particles intrude and flip bits, latch up circuits or cause permanent damage).

Shoebox sized Phi-Sat cubesats already incorporating AI functions (Credit: ESA)
Finally, for the prototypes, Google intends to research inter-satellite links. Terrestrial datacentres aim for 10Tbps but current satellite rates are 100Gbps at best. The company thinks such a boost requires dense wavelength division multiplexing and optimised spatial multiplexing, achieved by keeping distances between individual satellites low (100-200m for the trial). Downlink speeds also need to improve.
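The scale of that link upgrade can be sized with simple arithmetic. The bandwidth figures are from the text; the multi-kilometre comparison distance for a conventional formation is an assumption for illustration only:

```python
# Sizing the inter-satellite link gap (bandwidth figures from the article,
# plus one assumed comparison distance; purely back-of-the-envelope).
target_bps  = 10e12    # terrestrial datacentre interconnect target: 10 Tbps
current_bps = 100e9    # best current satellite optical links: 100 Gbps
improvement = target_bps / current_bps
print(f"Required capacity improvement: {improvement:.0f}x")

# Received optical power scales roughly with 1/d^2 for a diverging beam,
# so the trial's 100-200 m spacing buys a large link-budget margin over
# an assumed few-kilometre conventional formation:
d_conventional, d_trial = 5_000.0, 200.0   # metres (assumed vs article)
gain = (d_conventional / d_trial) ** 2
print(f"Link-budget gain from closing up: {gain:.0f}x")
```

A 100x capacity jump is needed; under the assumed distances, tightening the formation alone improves the link budget by several hundred times, which is why Google pairs close flying with DWDM and spatial multiplexing.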
Such goals highlight what two prototypes cannot fully address.
Google proposes a constellation of 81 satellites within a 1km radius at a mean sun-synchronous low-Earth orbit altitude of 650km.
This places them in the 400-650km range that is one of the most crowded (and debris-strewn) zones above our planet. Precise cluster control will be vital.
Next comes the economics. . .
Economically defensible?
Suncatcher depends on a factor that Google cannot greatly influence: the launch cost per kilogram. This has fallen thanks to Musk’s SpaceX, with the prospect that incoming competition from Bezos’ Blue Origin/Amazon Leo (formerly Project Kuiper) and others will drive prices down still further. Google believes prices need to drop from an estimated $3,600/kg (£2,700) today to just $200/kg.

Jeff Bezos giving a thumbs-up (Credit: Blue Origin)
The benchmark is energy. Earthbound datacentres have a wide cost range of $570-$3,000 per kW per year, explained in part by whether a site can tap into existing energy infrastructure or must build its own. Today’s launch costs would exceed even the top of that range, according to Google’s estimate, at $14,700/kW per year. At $200/kg it falls to $810/kW per year.
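Google's two launch-cost figures imply a simple linear scaling: the $/kW-per-year cost is the $/kg price multiplied by an implied mass-and-lifetime factor. A quick cross-check using only the numbers quoted above:

```python
# Cross-checking the article's launch-cost figures against each other.
today_per_kg    = 3600.0    # $/kg, estimated launch cost today
target_per_kg   = 200.0     # $/kg, the level Google says is needed
today_per_kw_yr = 14700.0   # $/kW per year implied at today's launch cost

# $/kW-per-year scales linearly with $/kg, so the implied factor is:
kg_years_per_kw = today_per_kw_yr / today_per_kg    # ~4.1 kg-years per kW
target_per_kw_yr = target_per_kg * kg_years_per_kw
print(f"Implied cost at $200/kg: ${target_per_kw_yr:.0f}/kW per year")
```

The result, about $817/kW per year, matches the quoted $810 to within rounding, and sits comfortably inside the terrestrial $570-$3,000 range, which is the whole basis of the business case.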
Musk has estimated that launch costs could become viable within three to five years, but Google says 2035 is more realistic.
Another factor is maintenance and upgrades. Satellites typically operate for 15 years, but AI processors face obsolescence within three to five years. Startups, including Nvidia Inception-supported Starcloud, promote a modular design that allows the satellite equivalent of server racks to be attached or detached, but this awaits use in anger. In-orbit servicing and upgrades require autonomous docking and robotic systems that remain unproven for this application.
Even if modular maintenance became possible, there are concerns about committing enormous capital to orbital datacentres that do not have even a man and a dog (or should that be a Gagarin and a Laika?) to physically oversee them.
Then, there are the politics…
Socially desirable?
The ability to build sovereign AI datacentres in space has attractions for civilian and military use: civilian, where land and energy are constrained; military, given AI’s importance to national security.
A huge question surrounds who will have jurisdiction over AI in space. Amid growing wariness of a ‘big tech’ oligarchy, the charge that the industry faces insufficient oversight is already climbing the geopolitical agenda.
AI-sat can potentially be done at a price its promoters will, eventually, be willing to pay. What price civil society is willing to pay is another question, and one that may itself take a decade to resolve.
Electronics Weekly