It's important to understand this quote. OpenAI, Anthropic, Google, xAI, and Meta are all operating with the same mindset: if they can access more compute, they can scale their products, improve performance, and generate more value.
That's why Huang described this moment as an inflection point. The industry has entered a self-reinforcing loop where demand for compute feeds on itself. The more capacity that comes online, the more it gets used.
The Next Wave Is Even Bigger
What most investors still don't appreciate is what comes next. We are moving beyond simple prompts and responses and into the era of autonomous AI agents.
These systems don't just answer questions. They run workflows, write code, analyze data, and make decisions. And importantly, they operate continuously.
Now, OpenClaw is designed to run locally on a computer, but the reality is that meaningful workloads will still be pushed into hyperscale environments where compute can scale efficiently.
This shift matters because persistent, always-on AI systems consume far more compute than the first wave of chatbot-style interactions. That means the next surge in demand will be even greater.
And it's not just OpenClaw. Another leading player in AI software, Perplexity, recently launched Perplexity Computer, which can leverage all frontier AI models. Rather than being open-sourced like OpenClaw, Perplexity Computer has been productized with platform security and ease of use in mind. No programming skills are required.
The Problem We Can No Longer Ignore
This brings us to the real constraint. Not chips. Not software. Not capital.
Electricity.
If NVIDIA hits the scale Jensen Huang is signaling, we're no longer talking about incremental growth in data center capacity. We're talking about a step-function increase in global power demand.
Let's walk through what that actually looks like.
Assume NVIDIA reaches roughly $500 billion in annual revenue by 2027, driven primarily by its next-generation Vera and Rubin systems, alongside the current Grace and Blackwell platforms.
At that scale, we're looking at deployments on the order of millions of GPUs.
Using NVIDIA's NVL72 architecture as a baseline, each rack contains 72 GPUs and 36 CPUs. To support that level of revenue, we arrive at roughly 125,000 racks – about 9 million GPUs – deployed globally.
Now consider the power draw. Each of these racks consumes approximately 200 kilowatts. Multiply that across the full deployment, and we're already at 25 gigawatts of electricity demand just to run the compute hardware.
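The rack math above can be checked with a quick back-of-the-envelope script. Note that the rack count and the 200-kilowatt per-rack figure are this article's working assumptions, not NVIDIA disclosures:

```python
# Back-of-the-envelope check of the deployment math above.
RACKS = 125_000        # assumed global NVL72-class rack count
GPUS_PER_RACK = 72     # NVL72 baseline: 72 GPUs + 36 CPUs per rack
KW_PER_RACK = 200      # assumed per-rack power draw, in kilowatts

total_gpus = RACKS * GPUS_PER_RACK
compute_gw = RACKS * KW_PER_RACK / 1_000_000  # kW -> GW

print(f"{total_gpus:,} GPUs")  # 9,000,000 GPUs
print(f"{compute_gw:.1f} GW")  # 25.0 GW just for the compute hardware
```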
But that's only part of the story.
Modern AI data centers require advanced cooling systems, networking infrastructure, and redundancy layers. When we account for total facility overhead using a typical power usage effectiveness (PUE) factor of 1.3, total demand rises to roughly 32 to 33 gigawatts.
And that's just NVIDIA. Once we factor in AMD systems and hyperscalers' growing use of custom application-specific chips (ASICs) for AI, total demand quickly approaches 60 to 65 gigawatts.
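The facility-level arithmetic is a simple multiplier on the 25-gigawatt IT load: total demand equals IT load times PUE. A short sketch, where the roughly 2x uplift for AMD and custom accelerators is inferred from the 60-to-65-gigawatt range quoted here, not a measured figure:

```python
# Facility-level demand: IT load scaled by power usage effectiveness (PUE).
compute_gw = 25.0  # NVIDIA IT load from the rack math above
PUE = 1.3          # typical overhead factor for cooling, networking, etc.

facility_gw = compute_gw * PUE
print(f"{facility_gw:.1f} GW")  # 32.5 GW including facility overhead

# Rough doubling once AMD systems and hyperscaler ASICs are included
# (an assumption implied by the 60-65 GW range, not a hard number).
industry_gw = facility_gw * 2
print(f"~{industry_gw:.0f} GW industry-wide")  # ~65 GW
```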
That's the true scale of what's coming.
A Growing Gap Between Supply and Demand
And this is where the problem becomes clear.
A majority of these AI data centers are being built on U.S. soil. Last year, the United States added about 53 gigawatts of new power generation capacity, the highest level since 2002. On the surface, that sounds like progress.
But it's not nearly enough. Even more concerning is what kind of power we're adding.
According to the EIA, roughly 65% of new capacity is coming from solar and wind, both of which are intermittent energy sources. Another 28% is going toward battery storage, which helps manage variability but does not generate new electricity.
When we strip all of that out, only about 7% of new capacity – roughly 3.7 gigawatts – is true baseload power capable of running continuously, the kind necessary to power AI factories.
Set that against the 60 to 65 gigawatts of demand we just walked through, and the gap is stark: we're adding only a small fraction of the reliable, always-on power this buildout requires.
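Applying the quoted EIA percentages to the 53-gigawatt total makes the shortfall concrete (note that 7% of 53 GW works out to just under 4 GW; treating everything outside solar, wind, and storage as baseload is a simplification):

```python
# New U.S. generation capacity added last year, per the figures above.
NEW_CAPACITY_GW = 53.0
SOLAR_WIND_SHARE = 0.65  # intermittent sources
BATTERY_SHARE = 0.28     # storage shifts power in time; it doesn't generate it

baseload_share = 1.0 - SOLAR_WIND_SHARE - BATTERY_SHARE
baseload_gw = NEW_CAPACITY_GW * baseload_share

print(f"{baseload_share:.0%} baseload")  # 7% baseload
print(f"{baseload_gw:.1f} GW")           # ~3.7 GW of always-on capacity
```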
Even under the most optimistic projections, we're nowhere close to closing that gap. And historically, actual buildouts fall well short of what's planned.
Yes, we can attempt to reroute intermittent energy toward less critical use cases and reserve baseload power for AI. But that's a temporary fix, not a scalable solution.
The Biggest Bottleneck in AI
New power is the biggest hurdle that Jensen Huang and NVIDIA face in realizing their lofty revenue targets. And this problem will get worse every year. NVIDIA's growth doesn't stop in 2027. Not even close.
Wall Street is already projecting another 20% of growth in 2028, pushing revenue past $600 billion. If I were to make a bet, I'd take the over on that number.
And every incremental dollar of that growth means more electricity. This is the constraint Jensen is solving for.
He needs more than just faster chips. He needs a new deployment model for compute itself.
This is why one of the most important announcements at GTC was so easy to miss.
NVIDIA Looks Beyond Earth
Huang didn't just talk about next-generation systems for terrestrial data centers. He introduced something entirely different:
The NVIDIA Space-1 Vera Rubin module.

NVIDIA GTC Presentation | Source: NVIDIA
At first glance, it looks like a compact version of NVIDIA's terrestrial GPU architecture. As we can see, the board pairs two Rubin GPUs with a single Vera CPU in a highly specialized package.
But that misses the point entirely. This system wasn't designed for Earth. It was designed for orbital AI data centers.
Operating in space introduces a completely different set of challenges. Radiation constantly interferes with electronics, causing errors that can crash traditional systems. Instead of relying on slower, radiation-hardened chips, NVIDIA engineered around the problem.
The system runs duplicate computations across paired GPUs and compares the results in real time, instantly correcting any discrepancies. Combined with advanced error correction at the memory level, this creates a highly resilient architecture capable of operating in one of the harshest environments imaginable.
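The paired-GPU approach described above is a form of dual modular redundancy: run the work twice, compare, and only accept matching results. A purely illustrative toy sketch (the retry-on-mismatch policy here is my assumption for the example, not NVIDIA's actual mechanism):

```python
# Toy illustration of dual modular redundancy: execute the same computation
# on two units, cross-check, and retry if a radiation-induced fault
# corrupted one of the results.
def dmr_execute(compute, inputs, max_retries=3):
    """Run `compute` twice (simulating paired GPUs) and compare outputs."""
    for _ in range(max_retries):
        a = compute(inputs)  # result from "GPU A"
        b = compute(inputs)  # duplicate result from "GPU B"
        if a == b:           # agreement -> accept the result
            return a
    raise RuntimeError("persistent mismatch between redundant units")

# Deterministic workload: both "units" agree on the first pass.
result = dmr_execute(lambda xs: sum(x * x for x in xs), [1, 2, 3])
print(result)  # 14
```

Real fault-tolerant systems pair this kind of lockstep comparison with memory-level error correction, which is what the module reportedly combines.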
And despite those constraints, the performance is staggering. The module delivers a 25x leap over prior generations, enabling real-time AI inference, autonomous decision-making, and even orbital training of advanced AI models.
A specific launch date has not yet been announced. But NVIDIA said six companies are already using its accelerated computing platforms to power next-generation space missions.
Contenders in the Space-Based Data Center Buildout
This is also why Elon Musk is looking toward outer space to expand AI data centers with his planned constellation of one million satellites.
In our latest issue of The Near Future Report, we said: