Can the new NVIDIA Jetson Thor lead a New Era of Edge Computing in Space?
- Gaurav Bajaj (CTO)
- Oct 1
The space industry is entering a new era where edge computing in orbit is no longer optional; it’s essential. From rapid tasking to real-time intelligence delivery, the demand for more powerful Data Processing Units (DPUs), the specialized payload data processing computers onboard satellites, continues to rise. In the world of NVIDIA (I will discuss the XILINX family in another post), the NVIDIA Jetson AGX Orin has become the go-to processor for advanced in-orbit AI applications. It has enabled companies like ours, Little Place Labs (LPL), to push the boundaries of on-orbit image processing and intelligence generation.
But a new player has just arrived: welcome, NVIDIA Jetson Thor! As one of the first industry voices to highlight this revolutionary new processor, we believe Jetson Thor has the potential to become a foundation for next-generation satellite constellations and Edge AI Data Processing Units (DPUs).
At Little Place Labs, we don’t build space hardware or Data Processing Units (DPUs) ourselves. Our core expertise lies in developing hardware-agnostic Space Edge applications that can adapt to a variety of onboard platforms. That said, we do value and prefer deploying on capable, high-performance hardware, especially processors that allow our Orbitfy software to operate at its full potential. The better the onboard infrastructure, the more value we can deliver to our customers through near-real-time intelligence and autonomous in-orbit processing.
What is Jetson Thor and Why Does It Matter to Space?
When NVIDIA released the Jetson AGX Thor in August 2025, my immediate reaction was simple: this changes everything for space computing. Built on the new Blackwell GPU architecture, Thor delivers a staggering 7.5x increase in AI compute, up to 2,070 FP4 TeraFLOPS, and is 3.5 times more energy-efficient than its predecessor, the Jetson AGX Orin. On top of that, it carries 128 GB of LPDDR5X memory and operates across a 40 to 130 watt power envelope, making it capable of sustaining multiple real-time AI workflows directly on edge devices.
For space, this isn’t just a spec sheet upgrade; it’s a generational leap. A Thor-powered satellite can run large computer vision models, vision transformers, and multi-modal AI pipelines entirely in orbit, without leaning on ground stations for inference. That means faster decisions, more autonomy, and constellations that are no longer bottlenecked by downlink capacity or ground processing delays.

Now, when I evaluate any new processor for orbit, I boil it down to one simple metric: how much compute do I get per watt? In orbit, every watt is precious, and Thor fundamentally shifts the curve. Not only does it multiply GPU throughput, it also boosts CPU performance, increases memory bandwidth, and adds faster I/O, all of which directly translate into more value per watt spent in space.
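As a rough illustration of that compute-per-watt metric, here is a small Python sketch using the published peak figures (FP4 TFLOPS for Thor, INT8 TOPS for Orin) and maximum power envelopes. The precisions differ, so treat the ratio as indicative rather than an apples-to-apples benchmark:

```python
# Illustrative compute-per-watt comparison from published peak specs.
# Note: Thor's figure is FP4 TFLOPS, Orin's is INT8 TOPS, so this is a
# rough indicator, not a true benchmark.
boards = {
    "Jetson AGX Orin": {"peak_tops": 275, "max_power_w": 60},
    "Jetson AGX Thor": {"peak_tops": 2070, "max_power_w": 130},
}

per_watt = {
    name: spec["peak_tops"] / spec["max_power_w"] for name, spec in boards.items()
}
for name, value in per_watt.items():
    print(f"{name}: {value:.1f} TOPS per watt at maximum power")

ratio = per_watt["Jetson AGX Thor"] / per_watt["Jetson AGX Orin"]
print(f"Thor delivers roughly {ratio:.1f}x the compute per watt")  # ~3.5x
```

At maximum power this works out to roughly 4.6 TOPS/W for Orin versus 15.9 TOPS/W for Thor, consistent with NVIDIA’s published 3.5x energy-efficiency uplift.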
Performance Comparison: Jetson Thor vs Jetson AGX Orin
| Feature | NVIDIA Jetson AGX Thor | Better By | NVIDIA Jetson AGX Orin |
|---|---|---|---|
| AI Compute Performance | 2,070 TFLOPS (FP4), 1,035 TFLOPS (FP8/INT8) | 7.5x | 275 TOPS (INT8) |
| CPU Performance | 14-core Arm Neoverse-based NVIDIA Grace CPU | 2.6x | 12-core Arm Cortex-A78AE |
| Precision (Most Common) | FP8 and FP4 (native) | - | INT8 (native) and FP16 |
| Precision (In Use) | Better suited for modern quantized models with FP8/FP4 support | - | Strong performance with established FP16/INT8 workflows |
| Memory | 128 GB LPDDR5X | 2x | 64 GB LPDDR5 |
| Power Envelope | 40–130 W | ~2x | 15–60 W |
| Transistors | ~200 billion | ~12x | 17 billion |
| GPU Architecture | NVIDIA Blackwell | - | NVIDIA Ampere |
| Key Use Cases | Embedded, robotics, Edge AI (dynamic compute allocation: flexibly partitions ~2,000 TFLOPS across separate applications) | - | Embedded AI, robotics |
The leap from AGX Orin’s 275 TOPS to AGX Thor’s over 2,000 TFLOPS ushers in a new class of high-performing edge computers for space, capable of supporting far more sophisticated AI and signal processing models onboard spacecraft. I see Thor as the first processor that truly allows spacecraft to handle workloads that, until now, were unimaginable within the tight power and thermal budgets of orbit.
With Jetson Thor:
Classical computer vision pipelines, the bread and butter of Earth observation, run three to four times faster.
SAR and hyperspectral fusion workloads, which have always been constrained by memory and bandwidth, could achieve up to 5× more throughput.
And most exciting to me, multi-modal AI tasks such as vision-language models achieve five to seven times the performance. This is the unlock that makes possible mission concepts we couldn’t have entertained with Orin: real-time event detection, adaptive re-tasking, and autonomous cross-sensor reasoning, all happening directly on orbit.

The “Three-Body” Constellation: Setting the Benchmark, Or Not
China’s ambitious “Three-Body Computing Constellation” aims to create a distributed AI supercomputer in orbit, composed of up to 2,800 satellites, each equipped with high-performance AI compute. The constellation is designed to deliver a total of 1,000 peta operations per second (POPS) and currently operates at 5 POPS with just 12 satellites in its initial phase. (Source: SpaceNews) That works out to roughly 417 TOPS per satellite, the rough equivalent of 1.5 Orins on each satellite of Three-Body.
The announcement made big headlines, but I found it somewhat unsettling. The coverage felt more like PR than substance. I don’t doubt the ambition, but I struggle to see why we should call 2,800 satellites running relatively modest compute per node a “supercomputer.” If raw POPS in orbit were the only metric, there are better, cheaper, and more efficient ways to deploy compute in space.
That said, the intent is clear: these satellites are expected to process massive amounts of data onboard, linking together via high-speed optical inter-satellite links to create a global AI compute fabric in orbit capable of running real-time inference and scaling large AI models, parallel to terrestrial supercomputing infrastructure.
What If Jetson Thor Powered The “Three-Body” Constellation?
To appreciate the disruptive potential of Jetson Thor in space, let’s reframe the numbers behind China’s Three-Body Constellation. The constellation is designed to reach 1,000 POPS with 2,800 satellites, which works out to roughly 357 TOPS per satellite, the equivalent of about 1.3 Jetson Orins per node. (The initial 12 satellites reportedly deliver 5 POPS, closer to 417 TOPS each.)
Jetson Thor, by contrast, delivers 7.5× the compute of Orin. In other words, a single Thor already exceeds the per-satellite compute baseline claimed for those Chinese satellites several times over. If you were to build a constellation around Thor-class DPUs, the number of satellites required to achieve 1,000 POPS would shrink dramatically.
Assuming linear scaling (a simplifying assumption), to achieve 1,000 POPS total compute:
| Scenario | Compute per Satellite (TOPS) | Total Compute (TOPS) | # of Satellites in Constellation |
|---|---|---|---|
| Three-Body Constellation with current compute (baseline) | 357 | 1,000,000 | 2,800 (as published) |
| With 2 AGX Orin on each satellite | 550 | 1,000,000 | 1,818 |
| With 1 Thor | 2,070 | 1,000,000 | 483 |
| With 2 Thor | 4,140 | 1,000,000 | 242 |
Assumptions and thinking:
FLOPS vs TOPS: I recognize that converting between them is imperfect; FLOPS often involve more complex operations and varying precision. For simplicity, I’m using a 1:1 conversion here, aligned with NVIDIA’s own published 7.5× performance uplift versus Orin.
Power Budgets: Yes, Thor draws more power, and budgets will differ by mission profile. But if the key objective is maximizing compute per node, it’s generally cheaper to deploy more power on fewer satellites than to build, launch, and operate many more spacecraft.
This simple substitution exercise implies a potential 91% reduction in satellite count to deliver the same global AI compute. Instead of 2,800 spacecraft, you could theoretically reach the same 1,000 POPS with ~250 dual Thor-powered satellites.
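The substitution arithmetic above can be sketched in a few lines of Python, using the same 1:1 TOPS/TFLOPS simplification. Satellite counts are rounded, so the baseline lands at 2,801 rather than the published 2,800:

```python
TARGET_TOPS = 1_000_000  # 1,000 POPS expressed in TOPS

# TOPS per satellite for each scenario
scenarios = {
    "Three-Body baseline": 357,
    "2x AGX Orin": 550,
    "1x AGX Thor": 2070,
    "2x AGX Thor": 4140,
}

# Satellites needed to reach the 1,000 POPS target, assuming linear scaling
sat_counts = {name: round(TARGET_TOPS / tops) for name, tops in scenarios.items()}
for name, count in sat_counts.items():
    print(f"{name}: {count} satellites for 1,000 POPS")

reduction = 1 - sat_counts["2x AGX Thor"] / sat_counts["Three-Body baseline"]
print(f"Dual-Thor constellation: ~{reduction:.0%} fewer satellites")
```

The dual-Thor scenario comes out to roughly a 91% reduction in satellite count, matching the figure quoted above.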
The implications are profound: lower launch and manufacturing costs, less orbital congestion, and streamlined constellation management. More importantly, it shows how new processor generations don’t just make satellites faster, they change the entire economics of space-based computing.
Why Satellite Builders and DPU Manufacturers Should Care
This performance leap opens new possibilities:
Reduced constellation size: Operators can design smaller but more powerful constellations, reducing CAPEX and OPEX.
New hardware benchmarks for DPU manufacturers: Thor’s energy efficiency and compute power set new standards for embedded space processors.
Enhanced multi-modal/mission capabilities: Potent Edge AI supports complex tasks like multi-modal data fusion and runs transformer workloads more efficiently.
Built for robotics: Facilitates autonomous robotic maneuvers extending to rendezvous and proximity operations.
Space domain awareness: facilitates asset monitoring with real-time analysis.
Key Takeaway: With AGX Thor, you get 7.5x more AI compute performance with only 2x more power. This means you achieve 3.75x more performance for every watt spent on compute.
Example 1: With AGX Thor, I can process 3,750 sq km of imagery on board the satellite using the same power and in half the time compared to processing 1,000 sq km of imagery on AGX Orin. I will not even compare less capable Jetson boards at this stage.
Example 2: Imagine having to process 10,000 sq km of RGB-NIR imagery end-to-end on Orin, and it takes 10 minutes. With AGX Thor, it may be possible to process the same amount of imagery in 1 minute and 20 seconds, using 73% less energy.
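The Example 2 numbers follow directly from the published ratios. Here is the arithmetic as a small Python sketch; the 7.5x speedup and 2x power-envelope ratio are the only inputs, and the 10-minute Orin baseline is the hypothetical from the example:

```python
SPEEDUP = 7.5       # Thor vs Orin AI throughput (NVIDIA-published ratio)
POWER_RATIO = 2.0   # Thor vs Orin maximum power envelope

orin_time_min = 10.0                        # hypothetical: 10,000 sq km on Orin
thor_time_min = orin_time_min / SPEEDUP     # same job at 7.5x throughput

# Energy = power x time, so the per-job energy ratio is 2.0 / 7.5
energy_ratio = POWER_RATIO / SPEEDUP
energy_saving_pct = (1 - energy_ratio) * 100

print(f"Thor time: {thor_time_min * 60:.0f} seconds")      # 80 s = 1 min 20 s
print(f"Energy saving vs Orin: {energy_saving_pct:.0f}%")  # ~73%
```

Thor draws twice the power but finishes 7.5 times faster, which is why the per-job energy drops by roughly 73% even as instantaneous power rises.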
Leveraging Vision Language Models (VLM):
Vision-language models (VLM) are a class of multi-modal AI systems that can understand both images (vision) and text (language) and connect the two.
For satellites and DPUs in orbit, VLMs unlock:
Real-time situational awareness: A satellite can analyze an image, interpret it, and send back a concise message (“Military convoy detected, heading east”) instead of a 500 MB image.
Reduced downlink load: By distilling gigabytes of imagery into kilobytes of text-based insights, you save bandwidth and time.
Autonomous tasking: A VLM onboard can “decide” what’s interesting and re-task sensors without waiting for ground control.
Cross-sensor reasoning: Pair an EO image with a SAR image and generate a natural language conclusion (“New structure detected despite camouflage”).
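To make the downlink-savings point concrete, here is a toy calculation. The 500 MB image size comes from the example above, and the text insight is the illustrative message, not a real payload format:

```python
# Toy estimate of downlink reduction: text insight vs raw image.
image_bytes = 500 * 1024**2                         # ~500 MB raw image
insight = "Military convoy detected, heading east"  # illustrative VLM output
insight_bytes = len(insight.encode("utf-8"))

reduction = image_bytes / insight_bytes
print(f"Insight size: {insight_bytes} bytes")
print(f"Downlink reduction factor: ~{reduction:,.0f}x")
```

Even allowing for metadata and error-correction overhead on a real link, distilling imagery into text insights cuts the downlink volume by many orders of magnitude.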
VLMs are computationally heavy. Running them on board an Orin-class device is possible only in very compact forms. Thor, with its 7.5 times more AI compute and larger memory, makes it feasible to run larger VLMs in real-time in orbit. That means satellites can move from “see and send” to “see, interpret, and act” autonomously.
Why Should End Users Care?
Let’s be honest: most end users don’t care whether a satellite runs on Jetson Orin, Jetson Thor, or something entirely different. They don’t live in the world of processor specs or GPU architectures. What they care about, and what they will notice, is the value delivered back to them: faster insights, new use cases enabled on orbit, and ultimately lower costs for the services they rely on.
That’s why I always bring it back to fundamentals: the amount of data collected (and to be processed) drives the amount of compute needed. Once you know how much processing you can afford given the watt budget, the rest is an Engineering and Operations decision. It’s our job to make the sound choice of which DPU or processor to fly, but it’s the end user who benefits from the outcome.
Here’s how those benefits translate:
Faster insights, not faster chips. If I can reduce the time from “image captured” to “insight delivered” from five minutes to 90 seconds, the end user doesn’t care what silicon made it possible. What they see is actionable intelligence in near real time, the difference between reacting to a wildfire before it spreads versus after the damage is done.
More use cases unlocked. The jump in compute capability means we can push new workloads onto the satellite, real-time SAR/EO fusion, on-orbit vision-language models, or robotic autonomy for servicing missions. For the end user, this translates into more mission scenarios solved and fewer limits on what their satellites can do without ground intervention.
Operational cost savings. With more compute per watt, we can do more with fewer satellites, fewer downlinks, and fewer ground stations. That means lower operating costs and simpler constellations. End users may not see the engineering tradeoffs that make it possible, but they will absolutely feel the benefit when services become cheaper, faster, and more reliable.
At the end of the day, Jetson Thor matters because it raises the ceiling of what’s possible in orbit. The processor itself is invisible to most of the end user segments, but the speed, autonomy, and cost-effectiveness it enables are not. That’s what they value, and it’s on us as engineers and operators to deliver it.
Possible Exceptions: That said, I should acknowledge there are certain end users who will care deeply about which processor goes onboard, particularly those looking at deep space missions or applications well beyond the typical domains of communications, Earth observation, or space situational awareness. In these cases, radiation tolerance, fault resilience, and long-duration reliability often take precedence over raw performance. For them, the choice of processor is not just an engineering decision but a mission enabler, and NVIDIA-based DPUs will need further adaptation and proof in harsher environments before they can be fully trusted.
What To Expect Next From LPL?
At Little Place Labs, we don’t just talk about the future of edge computing in space; we actively influence it. We are initiating R&D to evolve our Orbitfy software for the Jetson Thor module, focusing on benchmarking AI inference workflows relevant to Earth observation use cases. Here’s what comes next:
Benchmarks That Set the Standard – Running Orbitfy on Jetson Thor vs Orin to deliver trusted, mission-relevant performance insights for EO and constellation planning.
Prototypes That Unlock New Missions – Evolving Orbitfy to exploit Thor’s compute headroom for multi-modal AI and real-time on-orbit signal fusion.
Knowledge Sharing That Shapes the Ecosystem – Publishing papers and collaborating across government, industry, and academia to accelerate the adoption of physical AI in orbit.
Closing Thoughts
I fully acknowledge that NVIDIA processors are not necessarily radiation-tested, and some companies may find it challenging to operate them in harsh environments. But hey, that’s COTS. For now, adoption will be most practical in LEO missions, where shielding and fault-tolerant design can mitigate risks. It will take time, iteration, and in-orbit demonstrations before Thor or its successors prove themselves as a standard payload for computing in space. There will be challenges we can’t fully predict until these systems fly, but with each mission, we close the gap.
I hope to see the first wave of Thor-based DPUs making their way to orbit by 2027 or, at the latest, by 2028. That’s not far away, and the companies that start designing for it now will be the ones leading the next generation of autonomous, intelligent satellites.