Computing Gravity: From Earth to Orbit
I was at a student science exhibit at my kid’s school where someone presented a project comparing satellite-gathered data on specific electromagnetic wavelengths that correlate with crop health indicators. Monitoring these over time can give farmers better insight into how to take care of their fields. Here’s an instance where it might make sense to do some of the processing closer to the data source, in space, and only send the relevant, finalized results to Earth when needed.
This reminded me of an article I caught about China sending satellites into space to build a supercomputer network in orbit around the Earth. So now we have local computing, cloud computing, mobile computing, edge (CDN) computing, and now space computing?! I can see some interesting use cases. If I wanted to spy on someone using satellite photography and run facial recognition, it would be slow and inefficient to send all the raw images from the satellite down to Earth. Why not run some of that AI workload in space, so only the final results need to be transmitted? Makes a lot of sense to me.
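The downlink math makes the point concrete. A rough back-of-the-envelope sketch, using invented but plausible numbers (image sizes, bandwidth, and capture rates are all assumptions, not real mission figures), shows how much cheaper it is to transmit processed results than raw imagery:

```python
# Back-of-envelope: downlinking raw satellite imagery vs. on-orbit processed results.
# All numbers below are illustrative assumptions, not real mission figures.

RAW_IMAGE_MB = 500      # assumed size of one raw multispectral capture
RESULT_KB = 50          # assumed size of the extracted results (detections, indices)
DOWNLINK_MBPS = 100     # assumed downlink bandwidth in megabits/sec
IMAGES_PER_DAY = 200    # assumed capture rate

def transfer_seconds(size_mb, link_mbps):
    """Time to move size_mb megabytes over a link_mbps megabit/sec link."""
    return size_mb * 8 / link_mbps

raw_daily = transfer_seconds(RAW_IMAGE_MB * IMAGES_PER_DAY, DOWNLINK_MBPS)
processed_daily = transfer_seconds(RESULT_KB / 1024 * IMAGES_PER_DAY, DOWNLINK_MBPS)

print(f"raw downlink:       {raw_daily / 3600:.1f} hours/day")
print(f"processed downlink: {processed_daily:.1f} seconds/day")
```

With these assumptions, raw imagery would monopolize the link for hours a day, while processed results fit in under a second. The exact numbers don't matter; the orders-of-magnitude gap is the point.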

Seeing all these examples, I started thinking of this as computational gravity. Just like water flows downhill, computing naturally flows toward the path of least resistance - and that’s rarely the most powerful hardware. Instead, it’s wherever latency, bandwidth costs, and processing power find their sweet spot. Space computing is just the extreme case, where distance creates such a massive “computational gravity well” that it’s actually worth launching processors into orbit rather than beaming raw data across the void.
All of these efforts to put compute locally, at the edge, or wherever else are typically a matter of trade-offs. You want performance, but you also want security, and you want the lower cost of networked infrastructure. You can’t have all of these for every application or use case. For the space-compute use case, the point of capture is so distant from the point of usage that it makes sense to move the processing to space. In pre-internet days, networked computers ran in a strict client-server architecture; this is when the phrase “dumb terminal” was coined. Now cloud computing is the norm and the browser can be seen as the dumb terminal, but there are many cases where that model doesn’t fit, one of which is security.
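The trade-off can be sketched as a toy "gravity" model: score each candidate placement by latency, bandwidth cost, and whether the data is allowed to leave the device. Every weight and number here is invented for illustration, not derived from any real system:

```python
# Toy model of "computational gravity": score candidate compute placements
# by latency, bandwidth cost, and data sensitivity. All numbers and weights
# below are invented for illustration, not taken from any real deployment.

placements = {
    # (round-trip latency ms, bandwidth cost $/GB moved, data leaves device?)
    "on-device": (5,   0.00, False),
    "edge":      (30,  0.02, True),
    "cloud":     (120, 0.08, True),
}

def score(latency_ms, cost_per_gb, leaves_device,
          latency_budget_ms=100, gb_moved=10.0, sensitive=False):
    """Lower is better; placements that break a hard constraint are ruled out."""
    if latency_ms > latency_budget_ms:
        return float("inf")   # too slow for this workload's latency budget
    if sensitive and leaves_device:
        return float("inf")   # security requirement rules out remote compute
    return latency_ms + 100 * cost_per_gb * gb_moved  # arbitrary cost weight

best = min(placements, key=lambda p: score(*placements[p]))
print("best placement under a 100 ms latency budget:", best)
```

Change the budget, the data volume, or the sensitivity flag and the "gravity well" shifts: a different placement wins. That is the whole argument of this post in ten lines.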
Apple has recently been making efforts to run local AI on its iPhones, and the quality today just doesn’t compare to what can be run on a cloud server. I saw this specifically in a demo of local generative fill on a photo, and the results were not good. If security is truly the reason, quality was heavily sacrificed for it. There may be other motives at play: making security a primary reason for local compute means Apple can sell upgraded phones every year.
Another example is Waymo’s taxis. Sometimes, milliseconds can be the difference between life and death. With autonomous driving becoming more and more prevalent, it makes sense that the critical compute and data needed for decision-making be available in the fastest, closest manner possible. The computational gravity here is so strong that relying on a network connection to compute life-or-death decisions may not be the best architecture.
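To see why milliseconds matter, consider how far a car travels during its decision latency. The latency figures below are assumptions I picked to illustrate the scale, not measurements from any real vehicle:

```python
# How far a car travels during decision latency at highway speed.
# The latency figures are illustrative assumptions, not measurements.

SPEED_MPS = 30                    # ~108 km/h
LATENCIES_MS = {
    "onboard compute": 10,        # assumed local inference latency
    "nearby edge node": 50,       # assumed edge round trip plus inference
    "cloud round trip": 150,      # assumed cellular RTT plus inference
}

for where, ms in LATENCIES_MS.items():
    meters = SPEED_MPS * ms / 1000
    print(f"{where:>16}: {ms:>4} ms -> car travels {meters:.1f} m before reacting")
```

At these assumed numbers, a cloud round trip costs the car several meters of blind travel that onboard compute does not. A dropped cell connection makes the case even starker.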
In the end, computing isn’t just moving to the cloud. It’s flowing wherever it makes the most sense. From smartphones in our pockets to servers in orbit, the gravitational pull of latency, bandwidth, power, and security shapes where and how computation happens. The smartest systems today aren’t just the fastest, they’re the most strategically placed.
Whether it’s AI running in a farm drone, facial recognition on a satellite, or life-critical code in a self-driving car, we’re no longer designing for a single center of gravity. We’re designing for a constellation of them. The future of computing isn’t centralized or decentralized — it’s situational.
Friday June 6, 2025