Broadcom Stock: The Silent Winner in the AI Monetization Supercycle
December 11, 2025
Beth Kindig
Lead Tech Analyst
When discussing the AI Monetization Supercycle, I would be remiss not to highlight Broadcom. The AI accelerator market will inevitably widen beyond Nvidia’s GPUs - the keyword is widen. More players will sell more AI systems as the market expands, and that growth supports both the clear leader (Nvidia) and those already in pole position, such as Broadcom.
Last week, amid a flurry of noise in the AI market, my firm wrote an article on the AI Monetization Supercycle that is not being priced in. The analysis argued that the predominant risk is not an AI dot-com bubble or headlines weighing on sentiment, but rather missing out on what may be one of the strongest investing opportunities of our lifetime: what I’ve dubbed the AI Monetization Supercycle, catalyzed by the inference phase.
While many refer to this as the “AI Supercycle,” I believe Monetization is a critical word missing from that description. The hallmark of the next phase will not be the architectural leap toward AI superintelligence (although important) - but rather, it will be defined by the ability to monetize this very expensive technology. As an investor, I am obligated to care more about the latter.
Which brings us back to Broadcom—a stock my firm highlighted in our free stock newsletter last June in an article entitled “This Stock is Set to Surge from Inference Demand.”
At the time, I wrote:
“Broadcom has already benefited from both increasing compute and networking needs – but the surge in inference demand will disproportionately (and positively) flow to Broadcom’s top line and bottom line. This is because custom silicon’s cost advantages and ability to drive lower inference serving costs at scale creates a strong value proposition for Big Tech. As more and larger clusters are deployed to serve exploding inference demand, there will be additional long-term tailwinds for the Ethernet networking giant.”
The inference phase – what I'm calling the Monetization Supercycle – is squarely in front of us. While many will understandably point toward companies like OpenAI as the biggest beneficiaries, it is one of the market’s greatest misconceptions that platform owners always outperform suppliers (hardware stocks). During the mobile era, Broadcom’s stock outperformed Apple precisely because it supplied RF and connectivity components to the iPhone giant.
Below, we look more closely at whether the “silent winner,” Broadcom, can repeat that outperformance.
Stock Price Comparison Chart: $AVGO vs $AAPL. Broadcom stock significantly outperformed Apple stock over the 10-year mobile boom era, delivering a return of 1,490% compared to Apple’s 623%. Source: YCharts
Google TPU Ironwood v7: The Custom AI Chip Built for Inference
Last April, Google announced that its upcoming seventh-gen TPU Ironwood is its “most performant and scalable custom AI accelerator to date, and the first designed specifically for inference.” Individual Ironwood TPUs are interconnected into larger units called pods, coming in two sizes, a 256-chip pod and a 9,216-chip Superpod, with the larger size offering up to 42.5 exaflops of performance. Notably, the Superpod would deliver 24x the compute of El Capitan, the largest supercomputer in the world. The rack-scale architecture offers 64 TPUs compared to Nvidia’s racks with 72 GPUs, with a small cluster being four pods connected through an optical circuit switch network. While TPUs may excel at driving down costs on certain workloads, Nvidia’s GPUs still lead when it comes to processing performance.
Google adds that Ironwood offers 2x the performance per watt of last year’s Trillium generation, with 6x more HBM and 4.5x the HBM bandwidth; versus TPU v5p, released in 2023, Ironwood brings a more than 10x improvement in peak performance per chip and per pod. The substantial increases in memory and bandwidth are critical for maintaining high performance when processing larger data sets, while the improvements in power efficiency allow inference workloads to be run in a cost-effective manner.
It’s widely understood that Broadcom supplies Google with its custom TPUs. The incoming inference growth curve, which the I/O Fund detailed here, has led CEO Hock Tan to state that Broadcom may witness an acceleration of XPU demand into the back half of 2026. He said, “In fact, what we've seen recently is that they are doubling down on inference in order to monetize their platforms. And reflecting this, we may actually see an acceleration of XPU demand into the back half of 2026 to meet urgent demand for inference on top of the demand we have indicated from training.”
Something similar was echoed in the FQ3 call, with Tan stating: “But also as for these guys, they got to be accountable to being able to create cash flows that can sustain their path. They [are] starting to also invest in inference in a massive way to monetize their models.” On that note, Google’s TPU business received a significant vote of confidence recently with Anthropic signing a deal for up to one million TPUs, including Ironwood, coming online in 2026. The deal is said to be worth tens of billions.
For Broadcom, the TPUs are expected to be the primary driver of AI revenue growth in fiscal 2026 – estimates from HSBC earlier this summer projected Google’s TPUs to represent ~58% of Broadcom’s ASIC shipments at 1.79 million, but account for ~78% of ASIC revenue at $22.1 billion. This is because Google’s TPUs were estimated to carry a significant price premium at $13,000 per chip versus Broadcom’s other projects at $5,000 per chip. However, this is still less than half the cost of Nvidia’s chips at $30,000 to $40,000 for a solo B200 ($60,000 to $70,000 for a GB200).
Looking beyond fiscal 2026, projections for TPU shipments are surging. Morgan Stanley now expects 5 million TPUs to be shipped in 2027, a 67% rise from its prior estimate of 3 million; for 2028, the firm estimates shipments as high as 7 million, a 120% increase from its prior estimate. This implies YoY growth of 40% from 2027 to 2028, up substantially from 6% previously, and represents more than 2X growth in two years.
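The revision math above checks out with simple percentage arithmetic. The sketch below uses only the shipment figures cited in the text (in millions of units); the implied prior 2028 estimate is derived from the stated 6% prior YoY growth rate, not reported directly.

```python
# Quick arithmetic check of the Morgan Stanley TPU shipment revisions
# cited above. Figures are millions of units, taken from the text.

def pct_change(old: float, new: float) -> int:
    """Percentage change from old to new, rounded to the nearest whole %."""
    return round((new - old) / old * 100)

prior_2027, new_2027 = 3.0, 5.0  # prior vs. revised 2027 shipment estimates
new_2028 = 7.0                   # revised 2028 shipment estimate

print(pct_change(prior_2027, new_2027))  # 2027 revision: 67 (%)
print(pct_change(new_2027, new_2028))    # implied 2027->2028 YoY: 40 (%)
print(pct_change(prior_2027, new_2028))  # 133 (%) vs the prior 2027 base, i.e. more than 2X
```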
The I/O Fund first covered TPUs versus GPUs back in 2019 and revisited the topic in February 2024 in our analysis, Broadcom: Networking/ASICs Giant and the Second Largest by AI Revenue. Since then, we’ve provided quarterly coverage for two years.
If you want cutting-edge insights on AI stocks early in the cycle — including our take on Broadcom’s earnings this evening — sign up now.
Broadcom Stock’s AI Edge: Custom Silicon & Massive Hyperscaler Deals
Broadcom’s stock has been strong this year, outperforming the Nasdaq by nearly 50 points and the SMH by 20 points. This strong performance is partly due to custom accelerators that are often multiples cheaper than Nvidia’s GPUs for inference tasks, and because custom silicon grows more performant with each generation. By optimizing algorithms (software), Big Tech can drive higher performance from large language models, which helps drive down costs while also increasing output for specific workloads.
For example, Nvidia’s cost to make a merchant GPU is roughly estimated at $3,000 to $6,000, whereas the company charges $30,000 to $40,000 – hence the AI leader’s excellent margins. Reducing Nvidia’s high pricing power is what Big Tech is after, and this can be accomplished both through lower hardware costs and through optimizing the workloads for specific use cases – for comparison, Ironwood is expected to cost around $13,000 per chip.
Big Tech is prominent in Broadcom’s custom silicon customer list, which includes Google and Meta. ByteDance reportedly emerged as the third customer last summer. The company announced its fourth customer in FQ3 with a $10 billion XPU order. Hock Tan said in the FQ3 earnings call, “Last quarter, one of these prospects released production orders to Broadcom, and we have accordingly characterized them as a qualified customer for XPUs and, in fact, have secured over $10 billion of orders of AI racks based on our XPUs.”
Analysts are divided on who the fourth customer is. Susquehanna believes the new customer is Anthropic, likely for a TPU design. Mizuho analyst Vijay Rakesh also believes Anthropic to be the fourth customer. In contrast, Citi believes the $10 billion customer Broadcom disclosed in its FQ3 earnings is likely xAI.
Furthermore, OpenAI and Broadcom announced in October a strategic collaboration to deploy 10 gigawatts of OpenAI-designed AI accelerators. OpenAI and Broadcom will co-develop systems that include accelerators and Ethernet solutions from Broadcom for scale-up and scale-out. Broadcom plans to deploy racks of AI accelerators and network systems starting in the second half of 2026, with deployment completed by the end of 2029.
The OpenAI deal represents a substantial three-year revenue ramp for Broadcom stock and further solidifies its position in the AI silicon market. Citi estimates the deal with OpenAI could bring in $100 billion in sales and $8.00 in earnings per share over the next few years; however, Mizuho highlighted that the deal to deploy 10GW of OpenAI's custom ASIC, code named Titan, could be even larger at an estimated $150 billion to $200 billion deal over multiple years.
The enviable customer list is showing up in Broadcom’s results. This quarter, management guided Q4 AI revenue to $6.2 billion, which would represent ~19% sequential growth and mark an eleventh consecutive quarter of YoY growth.
Broadcom did not lay out a FY25 AI revenue target, yet the FQ4 guide for the quarter ending in October 2025 implies Broadcom is guiding for $19.9 billion in AI revenue for the year, up 63% YoY from $12.2 billion in FY24. Mizuho estimates that AI revenue will grow 103% YoY to $40.4 billion in FY2026 and nearly double again to $78 billion in FY2028. However, given the growing customer list, these estimates could prove to be too low.
Additionally, Hock Tan will be duly rewarded should AI revenue targets exceed current expectations. In September, Tan received a performance award of 610,251 shares of common stock as part of a recent contract extension. The award will fully vest if Broadcom reaches $90 billion in revenue from its AI products over any consecutive four-quarter period from FY2028 through FY2030. That award will double if Broadcom earns $105 billion in AI revenue and triple if revenue totals more than $120 billion. If Broadcom fails to hit $60 billion in AI revenue during the period, Tan will forfeit the entire award. This provides investors with a framework for upper targets for the bull case.
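The award structure described above can be read as a step function of trailing four-quarter AI revenue. Here is a minimal sketch of that framework, assuming vesting is a simple tiered multiple of the base share grant; the tier thresholds come from the text, while the function name and the treatment of the $60B-$90B gap (which the text does not spell out) are our assumptions.

```python
# Sketch of Hock Tan's performance award framework, as described above.
# ai_revenue_4q is Broadcom's AI revenue over any consecutive
# four-quarter period from FY2028 through FY2030, in $B.

BASE_AWARD_SHARES = 610_251  # shares in the base performance award

def award_multiple(ai_revenue_4q: float) -> int:
    """Multiple of the base award that vests at a given AI revenue level."""
    if ai_revenue_4q > 120:
        return 3  # triple the award above $120B
    if ai_revenue_4q >= 105:
        return 2  # double at $105B
    if ai_revenue_4q >= 90:
        return 1  # base award fully vests at $90B
    # Below $60B the entire award is forfeited; the text does not detail
    # the $60B-$90B range, so we assume no vesting there either.
    return 0

print(award_multiple(95) * BASE_AWARD_SHARES)   # 610251 shares vest at $95B
print(award_multiple(125) * BASE_AWARD_SHARES)  # 1830753 shares vest at $125B
```

As the article notes, these thresholds give investors a rough framework for management's upper-bound AI revenue targets in the bull case.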
Broadcom (AVGO) AI Revenue Forecast: Projected to hit $40.4 billion in FY2026, driven by Google TPUs and custom silicon demand.
Source: Company IR/TheFly/Mizuho
Broadcom’s Tomahawk 6: The Ethernet Switch to Power 1 Million-Plus AI Clusters
Broadcom has been quite vocal about the industry’s path to 1-million-plus accelerator clusters, frequently reiterating how its three (now four) hyperscale customers “each race towards 1 million XPU clusters by the end of 2027.” This would be multiples larger than current deployments, with xAI’s Colossus supercluster expanding from 100K to 200K GPUs. Today, these clusters are 10-20X larger than Ironwood’s 9,216-chip Superpod, highlighting the depth of AI demand.
Broadcom has continuously re-emphasized this forecast as it represents two major growth opportunities for the company: significant growth in accelerator deployments with inference tailwinds, and even more growth in networking deployments to support these clusters.
The shift to Ethernet and away from Nvidia’s lock-in ecosystem of GPU + InfiniBand is benefiting Broadcom, with the industry pointing to rising Ethernet demand. Arista said that momentum for Ethernet “has really shifted in the last year” while Nvidia touted that its new Spectrum-X Ethernet is annualizing at $10 billion in revenue, or $2.5 billion quarterly.
The company is committed to remaining on the leading edge of networking with its Tomahawk 6 switch, the industry’s first 102.4 Tbps Ethernet switch. The next-gen switch doubled the bandwidth of its predecessor, while offering flexible deployment options with 1,024 100G or 512 200G SerDes configurations, reducing switch count.
This raw performance upgrade paves the way for >100K to 1 million accelerator clusters by allowing larger leaf-spine fabrics to be constructed, while drawing less power and keeping latency low. Broadcom exec Ram Velaga said that the demand for the new switch is “unprecedented” with multiple >100K accelerator deployments “using Tomahawk 6 for both the scale-out and scale-up interconnect.”
When discussing Tomahawk 6, management points toward the flattening of the AI cluster as an important catalyst for this product, stating: “[...] Tomahawk 6 enables clusters of more than 100,000 AI accelerators to be deployed in just two tiers instead of three ... this flattening of the AI cluster is huge because it enables much better performance in training next-generation frontier models through a lower latency, higher bandwidth and lower power.” The two-tier topology also reduces complexity of cluster construction and reduces congestion choke points significantly, addressing another critical pain point of building larger and larger clusters.
Additionally, in terms of the AI networking opportunity, scale-up is a 5-10X larger opportunity than scale-out – setting up a nice trajectory as AI clusters grow. Oppenheimer analyst Rick Schafer expects next-gen Tomahawk 6 volumes to ramp in the second half of next year, providing an added growth and gross margin boost.
Broadcom FQ4 Earnings Preview: AI Revenue Outlook & OpenAI Deals
- Revenue expected to grow by 24.2% YoY and adjusted EPS by 31.7%.
- AI Revenue outlook
- New customer announcements
- Update on AI Serviceable Market for 2027
Broadcom is expected to report FQ4 revenue of $17.46 billion, up 24.2% YoY and a 220-basis-point acceleration from the 22% growth reported in FQ3. Adjusted EPS is expected to grow 31.7% YoY to $1.87.
Broadcom (AVGO) FQ4 revenue is expected to grow 24.2% YoY to $17.46 billion, driven by AI Revenue Acceleration.
Source: Company IR/Seeking Alpha
The company’s margins will be a key metric to watch in the upcoming report. Management has done an excellent job of maintaining strong margins, reducing operating expenses through cost controls and operational efficiency. Management expects adjusted gross margins to be down 70 basis points sequentially to 77.7% in FQ4, primarily due to a higher mix of XPUs and wireless. However, they are expected to be up 80 basis points compared to the same period last year. The company’s operating leverage should help to compensate for any sequential weakness in gross margins due to the FQ4 product mix. Management’s adjusted EBITDA guide for FQ4 is 67%, flat sequentially and up 200 basis points YoY.
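For readers translating the basis-point guidance into absolute margins, the sketch below derives the implied prior-quarter and year-ago gross margins from the guide; only the 77.7% guide and the 70/80 bps deltas come from the text, and the derived figures are our arithmetic, not company disclosures.

```python
# Back-of-envelope check on the FQ4 gross margin guide above:
# down 70 bps sequentially to 77.7%, but up 80 bps YoY.

BPS = 0.01  # one basis point = 0.01 percentage point

fq4_guide = 77.7                       # guided FQ4 adjusted gross margin (%)
fq3_implied = fq4_guide + 70 * BPS     # implied prior-quarter margin
yoy_implied = fq4_guide - 80 * BPS     # implied year-ago FQ4 margin

print(round(fq3_implied, 1))  # 78.4
print(round(yoy_implied, 1))  # 76.9
```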
Analysts expect strong adjusted EPS growth in the coming years. Adjusted EPS is expected to grow 39.1% YoY to $9.39 in FY ending October 2026 and 35.6% YoY to $12.72 in FY2027. The strong expected EPS growth showcases operating leverage, successful VMware integration, the benefits of higher margin software revenue, and rising AI revenue.
During the last earnings call after winning the $10 billion XPU order from the new customer, Hock Tan said, “And reflecting this, we now expect the outlook for our fiscal 2026 AI revenue to improve significantly from what we had indicated last quarter.” We expect management to provide more details on the AI revenue outlook for FY2026. The Q4 management guide of $6.2 billion implies that Broadcom is guiding for $19.9 billion in AI revenue for FY2025, up 63% YoY from $12.2 billion in FY24. Analysts are pointing to 100% YoY growth in AI revenue in FY2026, with Mizuho estimating that AI revenue will grow 103% YoY to $40.4 billion.
According to a recent report by The Information, Broadcom is in discussions with Microsoft to co-develop custom silicon chips. Analysts will likely ask for more details on this and on other customers, such as the $10 billion XPU order mentioned during the FQ3 earnings call and the OpenAI deal announced in October. The OpenAI deal is also expected to provide a strong boost to the company’s bottom line, as UBS expects “large-scale deployments are expected to ramp later, positioning EPS to reach about $13.50 in 2027 and potentially above $20 by 2028 as projects come fully online.” UBS highlights that the current consensus adjusted EPS estimate for FY2028 of $15.80 is very low, a 27% difference.
Hock Tan often references the AI Serviceable Market. We could expect Tan to provide an update for 2027 at the next earnings call, as the company has been adding new customers over the past year. Hock Tan had said during the FQ4 earnings call in December last year, “In 2027, we believe each of them plans to deploy 1 million XPU clusters across a single fabric. We expect this to represent an AI revenue Serviceable Addressable Market, or SAM, for XPUs and network in the range of $60 billion to $90 billion in fiscal 2027 alone.”
Conclusion
This year, Broadcom stock has outperformed Nvidia’s stock despite the two being more than $200 billion apart in AI revenue, with Broadcom at $20 billion in AI revenue for FY2025 ending in October and Nvidia at a $250 billion run rate in the quarter ending in January. Nvidia clearly has the scale for R&D purposes and the CUDA platform to help defend its lead. However, I’ve also argued inference will provide an opening for Broadcom and AMD to meaningfully compete on AI accelerators.
At the I/O Fund, when discussing Nvidia versus Broadcom, the answer is yes and yes. We look for fundamental strength, product positioning, supply chain signals, and numerous other proprietary criteria to help us determine if a stock is participating in the AI trend.
I won’t yank your chain by pretending investors must choose one or the other. In a widening market, leadership compounds at the top and radiates outward as exponential demand will lift the entire ecosystem – including a ripple effect for lesser-known AI networking and AI energy names.
As we move deeper into the second half of this AI-driven decade, the investors who stay focused on the bigger picture — rather than react to every speculative headline or force themselves into a false binary — will be the ones best positioned to capture the full opportunity of the AI Monetization Supercycle.
This year, my firm has 15 positions beating the Nasdaq YTD, up from ten positions last year – helping to cement the I/O Fund as one of the world’s leading AI portfolios. Our cumulative return of 210% over a five-year period would rank us #2 if we were a hedge fund and #5 if we were an ETF – notably, this strong cumulative return does not yet include our 2025 performance.
Get real-time trade alerts, weekly webinars and deep dives on lesser-known AI stocks in our Advanced tier. Learn more here
Please note: The I/O Fund conducts research and draws conclusions for the company’s portfolio. We then share that information with our readers and offer real-time trade notifications. This is not a guarantee of a stock’s performance and it is not financial advice. Please consult your personal financial advisor before buying any stock in the companies mentioned in this analysis. Beth Kindig and the I/O Fund own shares in AVGO at the time of writing and may own stocks pictured in the charts.