Nvidia’s $20 Trillion Thesis Is Intact. My 2026 Allocation Isn't


April 24, 2026


Beth Kindig

Lead Tech Analyst

Recently, I've reiterated my $20 trillion market cap thesis for Nvidia, which implies upside of roughly 310% over the next four years. However, my thesis does not hinge on Nvidia reaching that milestone through hardware alone. Instead, the thesis hinges on software advancements and the recurring revenue that will inevitably come from Nvidia's lead in robotics and simulation. I have emphasized the growing importance of Nvidia's software business relative to hardware since 2023. 

The distinction cuts both ways. By arguing that software is central to the $20 trillion thesis, I am also implying that Nvidia's hardware moat becomes less effective over time. Seven years ago, when Nvidia was still a roughly $100 billion company trading near $3.15 on a split-adjusted basis, my original thesis for why it could become the world's most valuable company centered on the CUDA moat. At the time, I wrote: "Developers will self-regulate the number of competitors for processing units due to a need for a universal platform that supports all frameworks." 

My firm, the I/O Fund, has held Nvidia through the full seven-year journey, sometimes at an allocation as high as 20%, through both remarkable upside and equally remarkable downside (it may be hard to remember, but the stock was down 60% in 2022 when I publicly defended it). 

The thesis on Nvidia's hardware moat has played out exceptionally well, but that also highlights one of the biggest risks investors face, which is becoming emotionally attached to a winning stock. While I still believe Nvidia will reach $20 trillion by 2030, I believe much of that 310% return is likely to be back-half weighted toward 2028-2030. This is what separates investors from AI enthusiasts. While an AI enthusiast can sit back, relax and discuss specifications and other fandom, an investor must always answer one question: is my capital better deployed elsewhere? 

Is Nvidia Stock Still the Best AI Stock in 2026?

So far in 2026, the question of where to deploy capital has not been easy to answer. Nvidia's stock only recently turned positive; the QQQs are barely positive this year, as are many tech-related ETFs such as IVES, GRNY and ARKK.  

In sharp contrast, the I/O Fund is up roughly 33% year-to-date, reflecting a willingness to follow the opportunities as they shift across the AI landscape. Recent winners include Bloom Energy, up 1,100% since our initial entry; an optical networking name we highlighted ahead of its 2026 surge, now up nearly 300% YTD and 650% since our lowest entry in November; and a photonics position we doubled down on in January with a 10% allocation, since up more than 130% year-to-date. 

The same framework that surfaced those opportunities is what tells me Nvidia's 2026 setup may no longer be as rewarding as what I can find elsewhere. The analytical case comes down to three things: the CUDA moat matters less with inference, custom silicon is gaining market share, and the delay in Rubin creates uncertainty at exactly the wrong moment.

On the flip side, the valuation is lower than its historic average, and in a volatile market, Nvidia could still stand out simply by continuing to post stronger earnings growth than most of large-cap tech. The company will remain the dominant system-level player in AI, and the CUDA moat will certainly not vanish overnight. 

The debate, in my view, is not about whether Nvidia stays important. It is about whether the return profile is still as compelling as what can be found elsewhere in the AI trade.  

CUDA Matters Less as AI Inference Takes Over

In 2018, my original thesis on why Nvidia could become the world's most valuable company was centered on the moat the CUDA platform provides, when I stated: "Nvidia is already the universal platform for development, but this won't become obvious until innovation in artificial intelligence matures. Developers are programming the future of artificial intelligence applications on Nvidia because GPUs are easier and more flexible than customized TPU chips from Google or FPGA chips used by Microsoft [...] When artificial intelligence matures, you can expect data center revenue to be Nvidia's top revenue segment. Despite the corrections we've seen in the technology sector, and with Nvidia stock specifically, investors who remain patient will have a sizeable return in the future."  

At the time, Nvidia's data center revenue was 1/6th of Intel's; today, the AI juggernaut reported $194 billion in data center revenue compared to Intel's $17 billion. Although you could pontificate on the many defensible design elements of Nvidia's AI systems, one simple way to describe this historic ascent is that CUDA's mature libraries and frameworks make it hard for an engineer to go anywhere else. Notably, this is not a regurgitated thesis; my view on the stock implications of the CUDA moat pre-dated the Street and AI experts by many years. That matters because I am now shelving that thesis as the inference market approaches. 

That original 2018 investment thesis is now facing a shift. 

Programming GPUs with the CUDA platform is primarily a training exercise, as this is the phase where engineers are experimenting and need the developer ecosystem, including extensive tools like cuDNN, NCCL, debugging, custom kernel support, and CUDA's massive libraries. The ecosystem has been built over more than 20 years, counts over 6 million contributing developers, and every ML framework is optimized for CUDA first. The switching costs today remain extraordinarily high for engineers. 

By contrast, inference is repetitive: once a model is trained, it runs millions of times per day. Serving platforms and inference frameworks like vLLM and TensorRT-LLM reduce the dependency on developing for a specific software platform, like Nvidia's CUDA. 

Training a frontier model is a one-time, multi-month event. Inference, by contrast, is the revenue-generating phase. Every ChatGPT query, every Copilot suggestion, every Waymo autonomy decision is inference. As frontier labs reach the limits of practical model size and enterprise AI adoption scales, inference workloads are projected to grow several times faster than training workloads through the rest of the decade. The segment where CUDA's moat is strongest is becoming a smaller share of total compute, while the segment where it is weakest is becoming the larger share.


There is also more of a push toward open standards for the inference phase to reduce dependency on hardware-specific code in serving paths, as tools like ONNX Runtime, vLLM and the Triton compiler help export models (or compile them) to run agnostically on any AI accelerator. 

In response to CUDA's moat weakening in the inference phase, Nvidia has pushed for their inference stack to remain proprietary by offering inference optimization software called TensorRT-LLM. TensorRT-LLM analyzes and optimizes LLMs to improve performance by fusing multiple operations into a single GPU kernel, selecting the optimal precision and optimizing memory usage for the key-value cache. Overall, Nvidia states this leads to 5X faster model performance for inference. 

However, consider that Nvidia needs to make this new attempt at preserving its ecosystem precisely because the CUDA empire will not hold neatly as the inference market plays out. The open-source market is becoming a serious contender to proprietary optimization software like TensorRT-LLM, as community-driven alternatives such as vLLM and SGLang accomplish something similar. Both have moved from research-project status to production deployment at major AI operators, with vLLM in particular now powering some of the largest LLM serving workloads outside of the hyperscalers themselves. Furthermore, large inference players like Cloudflare can build their own custom engines. 

The point is not to be an alarmist, but rather to note when the piece most central to my original thesis is shifting. CUDA will remain the most popular software development platform in AI by a wide margin; however, the freedom to go elsewhere is something Nvidia has not contended with at this level. 

Custom Silicon is Undeniably Increasing in Market Share 

A few months back, the market had a brief scare around what Google's TPU v7 Ironwood might mean for Nvidia's grip on AI compute. The concern was not simply that Google had built another custom chip, but that Ironwood was introduced as the first TPU designed specifically for inference. 

At the time, Google emphasized better power efficiency and stronger "intelligence per dollar" for serving workloads. Ironwood scales up to 9,216 chips, delivers 42.5 exaflops in its largest pod, and Google has paired it with software support such as vLLM on TPU, reinforcing the idea that inference is becoming a more open and cost-sensitive market than training. We covered this more in the write-up: "This AI Stock is Set to Surge from Inference Demand." 

Although Ironwood v7 makes major headway in narrowing the performance gap with Nvidia on inference workloads, the reality is that custom silicon programs require long development cycles. Designing the chip is only the initial stage; from there, hyperscalers need to optimize the compiler stack, optimize frameworks and validate performance at scale. The result is a far slower product road map that typically lags Nvidia's current generation of GPUs. This lag puts additional emphasis on Nvidia delivering on time. 

Why the Advantages of Custom Silicon Outweigh Development Timelines

Nvidia's data center GPUs carry gross margins above 70%. For companies spending $50-100 billion annually on AI infrastructure, the savings from moving even 20-30% of inference workloads to in-house silicon compounds into tens of billions of dollars per year. That math is driving Google and Amazon to accept slower product cycles in exchange for architectural independence. It is also the math incentivizing Meta and Microsoft to follow suit. Perhaps most importantly, the inference market will offer a catalyst for custom silicon compared to training because workloads are more specific, and cost savings can be achieved at massive volumes. 

Below is what a few industry analysts are predicting. Although I believe these are aggressive, they help to illustrate the challenges in front of Nvidia. 

Counterpoint Research believes that by 2028, annual custom silicon shipments will cross the 15-million mark and surpass GPU shipments, with the top 10 hyperscalers having deployed 40 million AI server compute ASIC chips cumulatively during 2024-2028, stating: 

"What is also supporting this unprecedented demand is AI hyperscalers building significant rack-scale AI infrastructure based on their in-house stacks, such as Google TPU Pods and AWS Trainium UltraClusters, enabling them to operate as one supercomputer." 

TrendForce has the most aggressive forecast, stating GPU-based AI servers will account for 69.7% of shipments in 2026, with ASIC-based servers rising to 27.8%. This doesn't separate out AMD's share of the GPU market; if you put that at 10 points, Nvidia's share would be 59.7%. 

With the information that I have today, these forecasts could be too aggressive. 

Broadcom has guided to $100B in AI chip revenue in 2027, and we've modeled another $50B in networking. If we allocate $45B base case to AMD and go with what we know of Nvidia's stated trajectory to $1 trillion in revenue, then the split looks something more like this for 2027: 

  • NVDA $500B 
  • AVGO $150B to $200B (assuming the management team was being conservative, we will use the $200B number) 
  • AMD $45B 
  • Total among top 3 silicon providers: $745B with NVDA at 67% market share versus the 59.7% implied above 
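The arithmetic behind that split can be checked with a quick calculation. The revenue figures below are the assumptions stated above, not reported numbers:

```python
# Rough 2027 revenue split among the top three AI silicon providers,
# using the article's assumptions (in $B), not reported figures.
revenues_2027 = {
    "NVDA": 500,  # stated trajectory toward $1T in revenue
    "AVGO": 200,  # upper end of the $150B-$200B range
    "AMD": 45,    # base-case allocation
}

total = sum(revenues_2027.values())
nvda_share = revenues_2027["NVDA"] / total

print(f"Total among top 3: ${total}B")   # $745B
print(f"NVDA share: {nvda_share:.1%}")   # ~67.1%
```

Changing any single input shifts the implied share meaningfully, which is why the 67% figure should be read as a scenario rather than a forecast.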

However, one data point that complicates things is MediaTek could see 150,000 CoWoS wafers in capacity in 2027, compared to 20,000 in 2026. Thus, the landscape is evolving in terms of the number of competitors. 

Notably, the level of erosion may be up for debate, but the most probable outcomes do not favor Nvidia continuing to dominate AI accelerator sales at the level it has in the past. In training, Nvidia represented 90% of workloads. 

There are many moving parts, but if we do assume that Nvidia sees 70% of market share, down from 90% previously, and capex grows at 60% year-over-year, then Nvidia's growth rate would be 24%.  
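Under a simple multiplicative model, Nvidia's revenue growth is the product of total accelerator spend growth and the change in its market share. A sketch of that arithmetic, using the 60% capex growth and share assumptions from the text (this simplified model will not reproduce every figure in the chart below, which may embed additional assumptions):

```python
def nvda_revenue_growth(capex_growth: float, old_share: float, new_share: float) -> float:
    """Revenue growth implied by market growth and a shift in market share."""
    return (1 + capex_growth) * (new_share / old_share) - 1

# Scenario from the text: capex grows 60% YoY, share slips from 90% to 70%.
growth = nvda_revenue_growth(capex_growth=0.60, old_share=0.90, new_share=0.70)
print(f"Implied revenue growth: {growth:.1%}")  # ~24.4%

# Simple sensitivity over a range of possible 2027 share outcomes.
for share in (0.90, 0.80, 0.70, 0.63):
    g = nvda_revenue_growth(0.60, 0.90, share)
    print(f"share {share:.0%} -> growth {g:+.1%}")
```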

Here's what a sensitivity analysis looks like:

Chart showing Nvidia revenue growth sensitivity under 60% AI capex growth and declining GPU market share to approximately 63%

Pictured above: Nvidia revenue growth sensitivity analysis assuming 60% annual AI capital expenditure growth and varying Nvidia GPU market share in 2027. Under consensus estimates from TrendForce and Counterpoint, Nvidia’s GPU share declines to roughly 69%, or about 63% after accounting for AMD capturing 6% of the GPU market, implying a revenue growth rate of approximately 15.6%.

For the fiscal year ending in January 2028, analyst estimates are at 30.1% growth. Note the numbers in the sensitivity analysis are for compute only, and do not include networking, which is growing rapidly and estimated to grow roughly 160% in the upcoming quarter. 

The Bull Case Hinges on Valuation 

Even with the supporting data above, I have kept a ~5% position this year in Nvidia, as its growth profile combined with its earnings profile is hard to beat across most tech stocks. The company is expected to see >50% growth on both the top line and the bottom line this year. This growth, combined with flat price action for about a year, has led to an attractive valuation. 

Chart comparing Nvidia stock P/E ratio of 40.7 to its 3‑year median valuation of 55.29

Pictured Above: Nvidia stock trades at a P/E ratio of 40.7 compared to the 3-year median of 55.29. Nvidia is currently trading 26% lower than the median.
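The discount to the historical median follows directly from the two multiples in the chart:

```python
# P/E figures from the chart above (source: YCharts).
current_pe = 40.7
median_pe_3yr = 55.29

discount = 1 - current_pe / median_pe_3yr
print(f"Discount to 3-year median: {discount:.0%}")  # ~26%
```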

Source: YCharts

Going back to my introduction, the question for a portfolio manager isn't whether Nvidia is fairly valued today. It's whether the capital compounds faster in Nvidia’s stock over the next twelve months than in the many alternatives we've identified.  

I just dropped my Top 15 List of AI Stocks, ranking the companies I believe will define the next year and whose fundamentals are on fire. The 70-page report is for premium Pro and Advanced members; sign up here.

Rubin Delay and HBM4: A New Risk for Nvidia Stock in 2026

In a previous free newsletter, I stated that Nvidia's product road map is the second line of defense should the CUDA moat be breached. What happens when both are breached? That is not a scenario I originally modeled for. 

The reported one-quarter delay on Rubin is, frankly, terrible timing, as it coincides with custom silicon becoming more attractive for Big Tech (which is ultimately aligned with the incoming inference market). The delay not only gives custom silicon one more quarter to catch up, but it also makes a strong case for back-up orders across Broadcom, MediaTek and/or AMD for supply chain diversification.  

HBM4 validation times have been cited as one key factor behind the delays for Nvidia’s upcoming Vera Rubin generation – we have seen in the past that these qualification tests can extend as long as 18 months, such as in Samsung’s case with HBM3e. Currently, reports suggest this HBM4-related delay could persist for one quarter. 

Reports suggest this delay stems from Nvidia pushing suppliers to “request speeds of over 11 Gb/s per pin,” well above the JEDEC standard of 8Gb/s. More evidence for a delay is surfacing, with DigiTimes reporting on April 15 that SK Hynix is “considering reducing its planned 2026 shipments of high-bandwidth memory (HBM4) to Nvidia by about 20-30%.”  

We also have another report stating SK Hynix is delaying its HBM4 production ramp until Q3, instead of its original Q2 target, with the delay said to better align with Nvidia’s schedule. Any potential delays or shipment cuts at SK Hynix also could be a key factor in a Rubin delay, as SK Hynix reportedly secured more than 70% of HBM orders for the upcoming chip; on the other hand, Micron and Samsung both have announced that HBM4 is in mass production for Vera Rubin, easing some of the supply constraints. 

Overall, I am not too concerned about 2026 revenue, as Blackwell orders are likely to backfill the Rubin delay. This is less about a revenue miss and more about the strategic shift toward custom silicon. Lastly, Rubin could slip by more than one quarter; for comparison, Blackwell was delayed two quarters. The uncertainty around exactly how long the delay will be is an additional risk that Nvidia investors will have to absorb. 

Nvidia: Seeking to Defend its Throne 

Last quarter, inventories increased more than 8% QoQ to $21.4 billion, but more importantly, Nvidia's supply-related commitments surged. We highlighted this last quarter as a key sign that the strong data center QoQ revenue inflection would continue. 

In Q4, Nvidia's supply-related commitments surged nearly 90% sequentially to $95.2 billion, a major step-up from the prior ~$28-30 billion range through late FY25 and the first half of FY26. Nvidia says it is strategically securing inventory and capacity to meet demand beyond the next several quarters, which we believe serves as a key sign that the current accelerated QoQ data center growth of ~$10 billion will likely persist as Blackwell Ultra continues ramping and as Vera Rubin also eventually ramps. 

While initially this could be taken as evidence that Blackwell's ramp is persisting, the more likely read now is that it also signals a Rubin delay. If so, the risk is that this inventory sits on the balance sheet until Rubin ships; the more probable outcome, however, is that most of these commitments are converted to Blackwell and Blackwell Ultra. 

TrendForce data supports this theory, stating that industry watchers expect Rubin to account for 22 percent of Nvidia's high-end GPUs, down from 29 percent. As stated above, the reason is: "time required to validate the newer HBM4 memory used by the chips, challenges with the migration to Nvidia's faster ConnectX-9 NICs, the system's higher overall power consumption, and the more advanced liquid cooling requirements [are] contributing to the delays." 

In the same article, the stated assumption is that Blackwell mix rises to 71% while Hopper is down to 7% from original expectations of 10% due to China tensions. 

According to additional checks, this aligns with KeyBanc, which states 2026 supply is expected to support "5.5M-6M Blackwell GPUs, 1.5M Rubin, and 1M Hopper GPUs." KeyBanc's estimates imply higher Hopper revenue, which is what could sting slightly, as these numbers would put Blackwell at roughly 69% to 71% of Nvidia's 2026 GPU output, Rubin at about 18% to 19%, and Hopper at about 12%. KeyBanc also cut Vera Rubin rack estimates by 50% to 6K, down from 12K-14K. 
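KeyBanc's unit figures convert directly into the mix percentages cited above. A sketch using the midpoint of the Blackwell range (the midpoint is my interpolation, not KeyBanc's):

```python
# 2026 GPU supply per KeyBanc (millions of units); Blackwell at the
# midpoint of the stated 5.5M-6M range (an assumption for illustration).
blackwell = 5.75
rubin = 1.5
hopper = 1.0

total = blackwell + rubin + hopper
for name, units in [("Blackwell", blackwell), ("Rubin", rubin), ("Hopper", hopper)]:
    print(f"{name}: {units / total:.1%} of 2026 GPU output")
```

Running the low and high ends of the Blackwell range (5.5M and 6M) produces the roughly 69% to 71% Blackwell mix cited above.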

As stated, the Rubin delay may not result in a large impact on revenue, as Blackwell is still supply-constrained. One could argue the Rubin delay could help Blackwell's pricing remain elevated for longer, since there is no next generation putting pressure on average selling prices.  

The bigger issue isn't losing the markup in the near term, but rather: (1) is the delay truly only one quarter? We have been here before with Blackwell, and that delay was two quarters. And (2) Nvidia's product road map will no longer be seen as invincible.  

Nvidia Stock Long‑Term Outlook: The $20 Trillion Thesis Revisited

Our catalysts for the $20 trillion thesis remain: a strong product road map, analyst estimates that are far too low in the 2028-2030 window, and, even more importantly, my prediction that Nvidia exits the decade as one of the largest AI software companies. We saw how quickly the company overtook Broadcom as the largest Ethernet company; my $20 trillion thesis hinges on something similar, with Nvidia dominating a large portion of the software market across robotics and automation.

Conclusion 

Nvidia remains one of the most important companies in the AI era, and I continue to believe the stock can reach a $20 trillion market cap by the end of the decade. What has changed is not the destination, but the path. The hardware moat that powered the first phase of Nvidia's ascent is becoming less absolute as inference grows, custom silicon improves, and the next 1-2 product cycles, including both Rubin and Rubin Ultra, carry timing risk. 

My thesis hinges on Nvidia reaching $20 trillion with software as the primary catalyst. The issue more near-term is that the market is still largely valuing Nvidia through hardware, just as the durability of that moat is becoming more open to debate.  

For those who have followed me since 2018, it has been a fantastic ride. I am still looking for the same thrill of steep upward stock trajectories unique to the AI market; only in different tickers.

The I/O Fund has built a strong track record in lesser-known AI winners, including Bloom Energy, up 1,100% since our initial entry last year, an optical networking stock up more than 620% since November, and one of our largest positions at a 10% allocation already up 130% year to date. We publish more than 100 paywalled articles each year on AI stocks, supported by an actively managed portfolio and real-time trade alerts. Don't miss out on the AI trade. Learn more here.

Please note: The I/O Fund conducts research and draws conclusions for the company’s portfolio. We then share that information with our readers and offer real-time trade notifications. This is not a guarantee of a stock’s performance and it is not financial advice. Please consult your personal financial advisor before buying any stock in the companies mentioned in this analysis. Beth Kindig and the I/O Fund own shares in NVDA at the time of writing and may own stocks pictured in the charts.

