
Nvidia CEO Predicts AI Spending Will Increase 300%+ in 3 Years


March 20, 2025


Beth Kindig

Lead Tech Analyst

Nvidia has traversed choppy waters so far in 2025 as concerns have mounted about how the company plans to sustain its historic levels of demand. It began with DeepSeek in late January, was furthered by suppliers providing mixed signals on the timing of its premiere Blackwell NVL systems, then saw rumors of data center cancellations from a major customer in February.

What better place to address these issues than the GPU Technology Conference (GTC) in San Jose, now dubbed the Super Bowl of AI? In the keynote held on Tuesday, Jensen Huang threw cold water on many of Wall Street’s assumptions, helping to alleviate concerns that demand for Nvidia GPUs will slow. In addition, I appeared on Fox News during the keynote to discuss why valuation is the great equalizer for this stock – along with my prediction for which quarter this year Nvidia will likely explode higher.

Nvidia Explains Why Cheaper Models Will Not Result in Less Compute

CEO Jensen Huang kicked the conference off with a wild remark about the current pace of progress in AI and the need for compute: “the scaling law of AI is more resilient, and in fact, hyper-accelerated, and the amount of computation we need at this point, as a result of agentic AI and reasoning, is easily 100x more than we thought we’d need at this time last year.”

The proof of this is easily seen as Blackwell chip sales have significantly outperformed Hopper year-over-year, with 3.6 million GPUs ordered so far in 2025 by the top 4 CSPs, versus a peak of 1.3 million Hopper GPUs in 2024. And this is just sales to the 4 largest CSPs, not including CoreWeave, Meta, xAI, Tesla, Nebius and many others that will be acquiring the chips. Huang added that “demand is much greater than that, obviously” -- with the readthrough being that this is what Nvidia is able to ship, while demand exceeds current capacity.


Nvidia’s Blackwell chip sales so far in 2025 have far exceeded Hopper’s peak. Source: Nvidia

Huang further illustrated that because AI can now reason beyond its pretrained data, a complex model generates roughly 10x more tokens, while compute must also run 10x faster to keep the model responsive, resulting in 100x more computation.

“Well, it could generate 100x more tokens and you can see that happening, as I explained previously, or the model is more complex, it generates 10x more tokens. And in order for us to keep the model responsive, interactive so that we don't lose our patience waiting for it to think, we now have to compute 10x faster. And so 10x tokens, 10x faster, the amount of computation we have to do is 100x more easily.”

Models will need to generate more tokens, more quickly; in other words, AI remains a hardware problem that Nvidia is uniquely positioned to solve. The amount of computation required for inference is significantly higher than previously estimated – and it’s this demand that Nvidia’s future generations of GPUs will aim to meet.
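For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch in Python of the 100x claim; the baseline token count is an illustrative assumption on my part, not an Nvidia figure.

```python
# Back-of-envelope sketch of Huang's 100x compute claim (illustrative
# assumptions, not Nvidia's figures): if a reasoning model emits ~10x more
# tokens than a one-shot model, and latency must stay flat, token throughput
# (and therefore compute) must also rise ~10x.

baseline_tokens = 1_000          # tokens a traditional model might generate (assumed)
reasoning_multiplier = 10        # ~10x more tokens for a reasoning model (per the keynote)
speed_multiplier = 10            # ~10x faster generation to keep latency flat (per the keynote)

tokens_needed = baseline_tokens * reasoning_multiplier
compute_multiplier = reasoning_multiplier * speed_multiplier

print(f"Tokens generated: {tokens_needed:,} ({reasoning_multiplier}x baseline)")
print(f"Required compute vs. baseline: ~{compute_multiplier}x")   # 10 x 10 = 100x
```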

Huang Forecasts Capex to Grow More Than 300% in 3 Years

Nvidia has been a massive beneficiary of big tech capex budgets. Our firm has been tracking Big Tech capex as a proxy for AI spending since 2022, when I publicly stated in my newsletter: “However, it has been our stance for some time that Big Tech capex is the true leading indicator for AI semiconductor companies. Despite an enormous increase in Big Tech capex primarily driven by data centers, this line item does not get the attention it deserves in terms of follow-through to the semiconductor industry.”

We’ve continuously reminded our readers that data center capex provides visible read-throughs for Nvidia as it captures a lion’s share of that spend, and GTC provided another clear signal that not only is capex not slowing as analysts fear, but is accelerating ahead of expectations.

At GTC, Huang pulled forward his view for $1 trillion in data center buildouts, saying he now sees the $1 trillion mark being reached as soon as 2028, ahead of prior expectations for 2030, representing an expansion of Nvidia’s addressable market.

Huang explained that he was confident that the industry would reach that figure “very soon” due to two dynamics – the majority of this growth accelerating as the world undergoes a platform shift to AI (the inflection point for accelerated computing), and an increase in awareness from the world’s largest companies that software’s future requires capital investments.


Nvidia CEO Jensen Huang predicts data center capex may reach $1 trillion as soon as 2028 as AI drives an inflection in computing. Source: Nvidia

Not only did Big Tech hit the $250 billion threshold in 2024, but these companies are on track to significantly exceed that in 2025, with Microsoft, Meta, Alphabet and Amazon likely to spend close to $330 billion on capex this year. This is easily more than double what was spent in 2023, and as a whole, that represents 33% YoY growth for the four companies purchasing Blackwell en masse.

Based on Huang’s prediction that data center expenditures could reach $1 trillion by 2028, that’s 3x growth in 3 years, and Big Tech alone (not even including Oracle and others) is already at one-third of that this year.


Big Tech’s capex is on track to approach $330 billion in 2025, up 33% YoY and more than double what was spent in 2023. Should Huang’s prediction prove true, it will represent 300% growth in the AI DC infrastructure market in three brief years.
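For those who want to check the math, here is a rough sketch in Python of the capex figures discussed above; the 2023 and 2025 totals are approximations consistent with the text rather than exact reported numbers, and the 2028 figure is Huang’s prediction.

```python
# Rough capex math from the figures cited above (2028 is Huang's prediction;
# the 2023 and 2025 totals are approximations, not exact reported numbers).
big_tech_capex = {2023: 150e9, 2024: 250e9, 2025: 330e9}  # Microsoft, Meta, Alphabet, Amazon (approx.)
industry_2028_target = 1_000e9                             # Huang's ~$1T data center buildout call

yoy_2025 = big_tech_capex[2025] / big_tech_capex[2024] - 1
vs_2023 = big_tech_capex[2025] / big_tech_capex[2023]
growth_to_1t = industry_2028_target / big_tech_capex[2025]

print(f"2025 vs 2024: +{yoy_2025:.0%}")                        # ~32%, roughly the 33% YoY cited above
print(f"2025 vs 2023: {vs_2023:.1f}x")                         # more than double
print(f"$1T in 2028 vs ~$330B today: ~{growth_to_1t:.1f}x")    # ~3x in three years
```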

China’s tech firms are also quickly raising capex to remain competitive in the global AI war, with Alibaba signaling capex of $52 billion over the next three years, more than what it has spent over the past decade, while Tencent outlined faster capex growth as it purchases more AI chips. I have said previously on Fox Business News that AI spending goes up in times of war – and neither China nor the US will want to lose to the other when it comes to AI dominance.

The I/O Fund specializes in covering lesser-known AI stocks on our research site with trade alerts and weekly webinars. Learn more here.

Huang Explains Why Nvidia’s GPUs Will Remain in High Demand

The breakthroughs we’ve seen in recent months and the rapid progression to complex problem solving and reasoning are increasing token usage by up to 100x and requiring compute that is 10x faster to power the next stages of AI.

Tokens are the core factor going into the economics of an AI model – tokens for training represent the core part of the model costs, while tokens for inference generate revenue and thus profit. In a demo at GTC, Nvidia showed that for a complex problem with multiple constraints, a reasoning model like DeepSeek’s R1 would reason through the possibilities and answer with 20x more tokens using 150x more compute than a traditional model like Meta’s Llama 3.3-70B.

Translating this to the data center shows why Blackwell is in such high demand, to the tune that it has already sold more than 2.5x as many GPUs in 2025 as Hopper did at its peak. Blackwell delivers up to 30x faster inference performance versus the HGX H100, at 116 tokens per second per GPU versus 3.5 tokens per second, with 25x better energy efficiency. Huang also explained that with Nvidia’s new Dynamo inference serving library, Blackwell can deliver up to 40x the performance for reasoning models.

Here's why this is important. We explained last week in a brief writeup, “Unlocking the Future of AI Data Centers: Which Fuel Source Reigns Supreme in Efficiency?”, that power is the core chokepoint and the key enabler for AI’s future, as AI cannot exist without new sources of electricity to power its applications. Huang highlighted this at GTC, explaining that data centers are power limited, meaning revenues are power limited, hence why customers are looking for the most energy efficient chips they can get.

A 100MW data center (which is becoming more commonplace for hyperscalers) could house 1,400 H100 NVL8 racks and produce a maximum of 300 million tokens per second. With Blackwell, the same data center could house 600 racks but produce a maximum of 12 billion tokens per second, in theory a 40x increase. Increased inference performance leading to higher token outputs both lowers costs and increases revenue potential – Nvidia pointed out that DeepSeek-R1 based software optimizations improved token output and revenue generation by 25x and lowered inference costs by 20x.

While these maximums are theoretical in nature, the underlying notion that a data center can serve substantially more tokens at a lower cost supports Blackwell’s high demand, from a superior TCO profile and increased revenue generating ability.
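As a quick sanity check on those throughput claims, the sketch below reruns the 100MW comparison in Python using the rack counts and maximum token rates presented at GTC; the per-rack figures are my own derived numbers, not ones Nvidia quoted.

```python
# Illustrative comparison for the 100MW data center example above
# (rack counts and token rates are the theoretical maximums presented at GTC).
h100_racks, h100_tokens_per_sec = 1_400, 300e6        # H100 NVL8 racks, max tokens/s for the data center
gb_racks, gb_tokens_per_sec = 600, 12e9               # Blackwell racks, max tokens/s for the data center

per_rack_h100 = h100_tokens_per_sec / h100_racks      # ~214k tokens/s per rack (derived)
per_rack_gb = gb_tokens_per_sec / gb_racks            # ~20M tokens/s per rack (derived)

print(f"Data-center uplift: {gb_tokens_per_sec / h100_tokens_per_sec:.0f}x")  # ~40x
print(f"Per-rack uplift:    {per_rack_gb / per_rack_h100:.0f}x")              # ~93x, with fewer racks
```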

Larger (and more) data centers expand the opportunity ahead for Nvidia – in the follow-up analyst call at GTC, Huang explained that “every gigawatt [of data center] is about $40 billion, $50 billion to Nvidia.”

According to CBRE, approximately 9.5 GW of data centers have gone under construction since the start of 2023. Given an average construction timeline of 18 to 36 months (depending on constraints such as power supply), Huang’s comments imply a $380 billion to $475 billion revenue opportunity over the next 1 to 3 years just from that existing footprint under construction since 2023. We’ve already seen large data center announcements in 2025, with construction on the first $100 billion data center for Stargate commencing and Crusoe securing 4.5GW in natural gas for future data centers.
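The revenue range above is simple arithmetic on Huang’s per-gigawatt comment and CBRE’s construction figure; a short sketch of that math, for transparency.

```python
# Rough revenue math from Huang's "$40 billion, $50 billion" per gigawatt comment
# and CBRE's ~9.5 GW under construction since early 2023 (my arithmetic, not guidance).
gw_under_construction = 9.5
revenue_per_gw_low, revenue_per_gw_high = 40e9, 50e9

low = gw_under_construction * revenue_per_gw_low      # ~$380B
high = gw_under_construction * revenue_per_gw_high    # ~$475B
print(f"Implied opportunity: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
```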

Upcoming GPU Roadmap Positions Nvidia to Capture $1T Data Center Spend

Nvidia is continuing to move at a breakneck pace when it comes to upgrading its GPU lineup, and maintaining this rapid release cycle is allowing it to continually pry away Big Tech’s capex year after year due to the performance, energy and TCO advantages each generation offers over the last.

At GTC, Huang unveiled Blackwell Ultra (the GB300 lineup), its successors Vera Rubin and Vera Rubin Ultra, and offered an initial look at Feynman, Rubin’s successor.

GB300 NVL72 Delivers 1.5x Performance Upgrade

Notably, Nvidia provided little mention of the GB200 NVL72 during the keynote and offered no concrete details on shipping timelines for the superchip, opting to discuss Blackwell Ultra instead.

Blackwell Ultra, the GB300 NVL72, is due in the second half of 2025, with Huang expecting a smooth transition to the upgraded platform. The GB300 NVL72 provides up to a 1.5x performance boost versus the GB200 and delivers 50% more FP4 dense compute with a 50% boost to memory capacity, both of which will increase inference throughput.

Rubin Offers 3.3x Boost to GB300

Nvidia’s Vera Rubin NVL144 is scheduled for release in the second half of 2026, a year after the GB300 NVL72. Rubin is expected to be “drop-in compatible” with existing Blackwell infrastructure and offers up to a 3.3x boost to FP4 inference performance versus the GB300, with 3.6 exaFLOPs compared to 1.1 exaFLOPs.

Per chip, Rubin offers 50 petaFLOPs of FP4, up 2.5x from 20 petaFLOPs for Blackwell. Rubin also marks a shift to HBM4 memory, while remaining at 288 GB capacity.

Rubin Ultra Sees Up to a 14x Increase in Inference Performance

Perhaps the largest boost in performance comes with Rubin Ultra NVL576, set to be released in the second half of 2027. Nvidia says the upcoming platform will offer up to 15 exaFLOPs of FP4 inference performance, a more than 4x increase from Rubin and nearly 14x increase from the GB300 in just two years.
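Pulling the roadmap’s FP4 figures together, the quick sketch below derives the generation-over-generation multipliers from the exaFLOP numbers Nvidia presented; the ratios are my arithmetic, rounded.

```python
# FP4 inference performance per rack-scale system, as presented at GTC (exaFLOPs);
# the ratios below are simple derived multipliers.
fp4_exaflops = {"GB300 NVL72": 1.1, "Vera Rubin NVL144": 3.6, "Rubin Ultra NVL576": 15.0}

rubin_vs_gb300 = fp4_exaflops["Vera Rubin NVL144"] / fp4_exaflops["GB300 NVL72"]
ultra_vs_rubin = fp4_exaflops["Rubin Ultra NVL576"] / fp4_exaflops["Vera Rubin NVL144"]
ultra_vs_gb300 = fp4_exaflops["Rubin Ultra NVL576"] / fp4_exaflops["GB300 NVL72"]

print(f"Rubin vs GB300:       {rubin_vs_gb300:.1f}x")   # ~3.3x
print(f"Rubin Ultra vs Rubin: {ultra_vs_rubin:.1f}x")   # more than 4x
print(f"Rubin Ultra vs GB300: {ultra_vs_gb300:.1f}x")   # nearly 14x in two years
```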

While this leaves much for the supply chain to address in a short period of time (as we know Nvidia likes to break the limits of what’s possible), Nvidia is proving that it remains committed to the two things that matter most as AI continues to scale past generative AI to agentic AI and physical AI: significantly boosting inference performance via hardware improvements and software optimizations, and reducing costs and thus TCO for its customers.

Put simply, with each upgrade, data centers can handle more inference requests, process more tokens, and generate more revenue within the same power envelope.

Nvidia’s Valuation Is the Equalizer

The major takeaway from GTC is that we’re only on the very brink of what AI can ultimately achieve. The need for compute will continue to rise as the industry progresses from generative AI to advanced reasoning models, to comprehensive AI agents, to autonomous vehicles and robotics where real-time inference is an absolute necessity for split-second decision making.

I spoke with Charles Payne on Fox Business News live during GTC to explain why I believe that the event’s major takeaway is that GPU demand is secular, not cyclical. I explained that Huang is “answering for investors why Nvidia’s GPUs will remain in demand. It does not matter if cheaper models are run on a single GPU, because ultimately, for these advancements to continue, we need to see that 10x in [faster] computing power, and we all know which company will serve that demand.”

Huang put it quite simply: “every single company only has so much power. And within that power, you have to maximize your revenues, not just your cost.”

While the limited mention and unclear timing of the GB200 NVL72 during the keynote were likely factors behind the muted stock price reaction, I would argue that Nvidia’s stock is absurdly cheap ahead of Q3 and Q4’s volume ramp.


Nvidia is trading at 26.5x forward earnings with growth of over 51% expected this year. Source: YCharts

Nvidia is currently trading at 26x this fiscal year’s earnings, with earnings growth forecast at 51.5% to $4.53, and at 20x next year’s earnings, with 27% growth to $5.76. That 26x multiple is nearly a 25% discount to Nvidia’s average forward P/E ratio over the past two years, and the same multiple it commanded before May 2023’s Hopper-driven breakout quarter.
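To show how those multiples are derived, here is a brief sketch of the forward P/E math; the share price used is an assumption chosen to be consistent with the quoted multiples, not a live quote.

```python
# Forward P/E sketch from the EPS estimates cited above. The share price is an
# assumed figure consistent with the quoted multiples, not a live market quote.
price = 118.0                          # assumed NVDA share price at the time of writing
eps_this_fy, eps_next_fy = 4.53, 5.76  # consensus EPS estimates cited above

print(f"This FY forward P/E: {price / eps_this_fy:.1f}x")                     # ~26x
print(f"Next FY forward P/E: {price / eps_next_fy:.1f}x")                     # ~20x
print(f"Implied EPS growth next FY: {eps_next_fy / eps_this_fy - 1:.0%}")     # ~27%
```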

Conclusion

Although there are many details from Nvidia’s GTC keynote worthy of discussion, Big Tech capex is the single most important point for investors, as the sheer amount of capital pointed at data center infrastructure from a handful of companies is truly unparalleled in the history of the markets.

We’ve continuously reminded our readers that data center capex provides visible read-throughs for Nvidia as it captures a lion’s share of that spend, and GTC provided another clear signal that not only is capex not slowing as analysts feared but is accelerating ahead of expectations.

In the more immediate term, we have mixed signals from suppliers on the exact timing of Blackwell’s GB200 NVL72s. The premiere SKU was originally expected to ship in volume in Q1 and that did not happen. Going into the February earnings report, I stated my spidey senses were up in the article “Nvidia Suppliers Send Mixed Signals for Delays on GB200 Systems – What It Means for NVDA Stock” and cautioned that the earnings report was unlikely to offer the blowout that investors have become accustomed to. This was despite Wall Street growing exuberant into the print and aggressively raising price targets.

Later, I/O Fund Portfolio Manager Knox Ridley stated that if Nvidia breaks $123-$119, the stock would likely find support between $102 and $83. This scenario remains a possibility given the weakness we have seen in the broad market. With that said, we see any dips on Nvidia as a buying opportunity as the stars are aligning for Q3-Q4 in terms of volume shipments on the Blackwell and Blackwell Ultra GPUs.

The I/O Fund has a strong track record on this stock, discussing every twist and turn publicly for our free stock newsletter readers with documented gains of up to 4,100% as far back as 2018 based on a very early AI thesis. The I/O Fund sends real-time trade alerts for every entry and exit, and our research members will be notified via text when we deem the risk/reward favorable and resume buying Nvidia. Learn more here.

Disclaimer: The I/O Fund conducts research and draws conclusions for the company’s portfolio. We then share that information with our readers and offer real-time trade notifications. This is not a guarantee of a stock’s performance and it is not financial advice. Please consult your personal financial advisor before buying any stock in the companies mentioned in this analysis. Beth Kindig and the I/O Fund own shares in NVDA at the time of writing and may own stocks pictured in the charts.


Gains of up to 2,250% from our Free Newsletter.


Here are sample stock gains from the I/O Fund’s newsletter --- produced weekly and all for free!

2,250% on Nvidia

670% on Bitcoin

*as of Mar 04, 2025

Our newsletter provides an edge in the world’s most valuable industry – technology. Due to the enormous gains from this particular industry, we think it’s essential that every stock investor have a credible source who specializes in tech. Subscribe for Free Weekly Analysis on the Best Tech Stocks.

If you are a more serious investor, we have a premium service that offers lower entries and real-time trade alerts. Sample returns on the premium site include 3,580% on Nvidia, 860% on Chainlink, and 1,010% on Bitcoin. The I/O Fund is audited annually to prove it’s one of the best-performing Funds on the market, with returns that beat Wall Street funds.
