AMD vs Nvidia: The AI Stock That Could Win by 2028
June 20, 2025
Beth Kindig
Lead Tech Analyst
Last week, AMD offered more details on the release of its groundbreaking GPUs with little fanfare in the markets – which is par for the course, as AMD has a history of being forgotten until the company can no longer be ignored.
Five years ago, I dubbed AMD the “Dark Horse” for my premium research members, as the company had a mere 4% share of the data center CPU market and was up against the near-monopoly of Intel. The term “Dark Horse” refers to a competitor that unexpectedly achieves victory; I was predicting AMD would eventually overtake Intel.
Two quarters ago, AMD posted CPU server market share of 39.4% -- officially surpassing Intel.
In the technology industry, the probability of an underdog successfully taking on a first-place contender with a formidable lead is incredibly rare. Yet, there is an element of catching the market off guard that helps to compound the returns. The opposite of this is known as a crowded trade.
Does AMD have what it takes to overtake Nvidia on stock performance in the next few years? Most investors assume Nvidia will continue to dominate — and AMD will remain a distant second. In this piece, I’ll walk you through why AMD’s positioning in the AI cycle could lead to an outcome few are prepared for.
Background on what AMD Achieved
When Lisa Su became CEO of AMD in 2014, the company was on the brink of bankruptcy, operating at a loss from 2012 to 2017. The bold bets the company made on the Zen architecture saved it from going under.
Pictured above: The Zen architecture released in 2017 helped AMD move from deep in the red to the black on margins. Source: MacroTrends
Examining how AMD staged that comeback through changes in CPU architecture, process technology, and chiplets is key for investors: not only did it result in over 3,600% returns in 10 years, but the company is now setting up to become a strong contender in the GPU server market.
Pictured above: In 2022, Nvidia stock and AMD stock had seen returns in the same zip code before Nvidia’s meteoric rise. Will AMD catch up in the coming years? Source: YCharts
AMD Released the Zen 2 Architecture in 2019:
Five years after Lisa Su became CEO, AMD was preparing not merely to survive but to rival Intel. The Zen 2 architecture was an important release that allowed AMD to leapfrog Intel with a 7nm chip while Intel was still producing 14nm and 10nm chips. Because 7nm is twice as dense as 14nm, AMD was able to release a 64-core, 128-thread server chip rather than its previous 32-core part. Up until early 2019, Intel’s offering had been a 28-core server chip with 64 threads. The result of being first to 7nm was that AMD could produce a more power-efficient chip that allowed more cores.
The Zen 2 architecture also introduced a multi-chip module that uses the most advanced process technology where it is needed most, combining 7nm compute chiplets with a 14nm die. This was quite a competitive leap, as Intel was still using a monolithic design.
In this case, the 14nm die was leveraged for memory controllers, because the central I/O hub handles input/output and memory traffic better. This helped AMD beat Intel on memory bandwidth. The design also greatly improved performance by putting the L2 cache on each core and sharing the L3 cache across cores. Overall, these design improvements lower the power required while increasing performance: fewer NUMA hops are needed, which increases instructions per clock and ultimately reduces latency.
AMD’s second-generation EPYC server processors sparked the company’s comeback with 1.8 to 2 times the performance of Intel’s Xeon processors, but perhaps most importantly, EPYC 2nd Gen came in at half the cost of Intel’s in some instances. Undercutting Intel on price became a virtuous cycle, as driving down costs meant more chips would be bought from AMD.
In a 2021 webinar on AMD’s stock that I held for Premium Members, I noted at the time that a third-party analyst, Michael Larabel, benchmarked AMD as being 14% faster than Intel while costing about 30% less. The result was that for every $1.00 of Rome chip sales, Intel lost $2.25 in Xeon SP sales. The savings can then be deployed to buy more Rome chips, further depressing Intel’s revenue.
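To make the price-performance math concrete, here is a back-of-the-envelope sketch in Python. The 14% and 30% figures come from the benchmark cited above; the normalization to a $1.00 baseline is a simplification for illustration, not actual pricing data.

```python
# Rough performance-per-dollar comparison using the benchmark figures above:
# Rome ~14% faster than the comparable Xeon at ~30% lower cost.

intel_perf, intel_price = 1.00, 1.00        # normalized Xeon baseline
amd_perf = intel_perf * 1.14                # ~14% faster
amd_price = intel_price * (1 - 0.30)        # ~30% cheaper

print(f"Intel perf per dollar: {intel_perf / intel_price:.2f}")
print(f"AMD perf per dollar:   {amd_perf / amd_price:.2f}")  # ~1.63x the baseline
```

Under these assumptions, a buyer gets roughly 60% more performance per dollar, which is the kind of gap that compounds into the market-share shift described above.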
Since the Rome series, AMD has taken more market share with the Milan and Bergamo series, with improvements such as 3D stacking in Zen 3, which tripled the L3 cache size while adding only four clock cycles of latency, and further customization of CPUs for cloud-native workloads with less cache and more performance per watt. Genoa was the 4th generation and provided more cache for general-purpose workloads.
AMD versus Nvidia: Why Memory Gives AMD an Inference Edge
The word “inference” will come up a lot in the coming years for AI investors, and thus, it makes sense to have a brief discussion on how it differs from training.
- Training:
Training is the process of a model learning patterns from labeled data by adjusting internal parameters (called weights). Each step involves a forward pass and a backward pass (backpropagation) to update those parameters. This phase is computationally intensive, requiring significant memory and parallel processing power.
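For readers who want to see what a forward and backward pass looks like in practice, below is a minimal PyTorch sketch of a single training step. The toy model, synthetic data, and hyperparameters are placeholders for illustration only, not anything tied to the hardware discussed here.

```python
import torch
import torch.nn as nn

# Toy model and synthetic labeled data, purely for illustration.
model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 128)          # a batch of 64 samples
labels = torch.randint(0, 10, (64,))   # their labels

# Forward pass: compute predictions and the loss.
loss = loss_fn(model(inputs), labels)

# Backward pass: compute a gradient for every weight.
optimizer.zero_grad()
loss.backward()

# Update the internal parameters (weights).
optimizer.step()
```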
Training is where Nvidia’s strengths are nearly insurmountable, as the leader in combining parallel processing (CUDA cores) with matrix computation (Tensor Cores). Over the past few years, Nvidia has increased compute power by an order of magnitude, to the point of defying Moore’s Law, with architectural changes such as Tensor Cores and lower-precision floating-point formats.
For example, the H100 is able to switch from 16-bit floating point (FP16) to 8-bit floating point (FP8) to significantly increase training speed by requiring less memory and speeding up data-transfer operations. The transformer engine in the Hopper generation helps models apply self-attention to detect how data elements in a series influence and depend on one another.
The second-generation transformer engine in the Blackwell architecture offers FP4. This is helpful because AI models are moving toward neural nets that lean on the lowest precision that still yields an accurate result. In this case, 4-bit units double the throughput of 8-bit units, compute faster and more efficiently, and require less memory and memory bandwidth.
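To illustrate why lower precision matters so much, here is a rough sketch of how the memory footprint of model weights shrinks as precision drops. The 70-billion-parameter figure is an arbitrary placeholder, and the sketch ignores activations, optimizer state, and KV cache, so treat it as directional rather than precise.

```python
# Approximate memory for model weights alone at different precisions.
# Parameter count is a placeholder; real deployments carry extra overhead.
params = 70e9  # hypothetical 70B-parameter model

bytes_per_param = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

for fmt, nbytes in bytes_per_param.items():
    gigabytes = params * nbytes / 1e9
    print(f"{fmt}: ~{gigabytes:,.0f} GB of weights")

# Halving precision halves the memory and bandwidth needed to move the
# same parameters, which is why FP4 can roughly double throughput over
# FP8 on bandwidth-bound workloads.
```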
The premier SKU shipping now is the GB200 NVL72, which delivers real-time trillion-parameter LLM inference, 4X LLM training, 25X energy efficiency, and 18X data processing. The GB200 also provides 4X faster training performance than H100 HGX systems and includes a second-generation transformer engine with FP4/FP6 Tensor Cores. The 4nm process integrates two GPU dies connected by 10 TB/s NVLink, with 208 billion transistors.
The point is that taking on Nvidia’s lead in training is not AMD’s goal. You can, of course, use AMD’s GPUs for training, but this isn’t where AMD can feasibly compete – and thus, its stock has suffered during the LLM training boom. Since the launch of Nvidia’s Ampere in May of 2020, Nvidia’s stock is up 1,700% compared to AMD’s 135%.
You can read more about the history of Nvidia’s GPU architectures including Blackwell in the analysis: "Here’s Why Nvidia Stock Will Reach $10 Trillion Market Cap."
- Inference:
Inference takes batches of real-world data and quickly comes back with an answer or prediction – therefore, this stage prioritizes low latency (speed) over raw compute power. For example, inference will take a trained model and produce a probable match for new data in milliseconds. While it can be compute-intensive for large models like GPT-4, inference generally favors low latency, higher efficiency, and lower cost.
In many applications, it makes sense to run inference at the edge (closer to where data is generated). However, cloud inference is still widely used for models that are too large or resource-demanding to deploy on local devices. Compared to training, inference requires only the forward pass through the model, making it more efficient in terms of power and hardware demands.
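In code terms, the contrast with the training sketch above is simple: inference runs only the forward pass with gradient tracking turned off. Below is a minimal PyTorch sketch; the toy model again stands in for a real trained network.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)   # stand-in for a fully trained model
model.eval()                 # switch layers such as dropout to inference mode

new_data = torch.randn(1, 128)   # one incoming real-world sample

# Inference: forward pass only -- no gradients, no backward pass.
with torch.no_grad():
    logits = model(new_data)
    prediction = logits.argmax(dim=-1)

print(f"Predicted class: {prediction.item()}")
```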
If we go back and look at how AMD was able to take on Intel, it was, in brief, with an architecture that required less power at nearly half the cost. This helps illustrate that AMD’s strengths are a much better fit for inference than for training.
Can AMD’s MI350X and MI355X GPUs Close the Gap with Nvidia?
Last week, AMD introduced its Instinct MI350 series GPUs, including the MI355X, with up to 4X the performance of the previous MI300X generation and up to 40% more tokens per dollar compared to Nvidia’s B200 accelerators ...
Below, I tell you key things about AMD’s upcoming release and whether AMD has the chance to close the gap with Nvidia ...
Find out the following below:
- We compare AMD’s MI350X and MI355X with Nvidia’s B200s and GB200s to decipher if AMD has what it takes to close the gap with the AI leader
- Clear conclusions on the next 1-2 years that are tailored for stock investors and how we plan to position our portfolio
- The SKU that all investors should know about
Subscribe now and save $100 off our Advanced plan or $75 off our Pro plan.
Our five-year cumulative returns of 210% would place us as #2 if we were a hedge fund and #5 if we were an ETF.
Paid subscribers, click here to view the full article
Not ready to subscribe but want more thoughtful analysis from a top-performing team in tech? Every week, we publish free research. 👉 Sign up here.
Disclaimer: This is not financial advice. Please consult with your financial advisor in regards to any stocks you buy.