
AVGO Transcript: AI revenue to top $100bn by 2027; XPU collaboration is sustainable
Below is Dolphin Research's curated transcript from Broadcom's (AVGO) FY2026 Q1 earnings call. For our earnings take, see: "AVGO: AI Fully Loaded; A Sharper Rival to Nvidia?".
Broadcom (AVGO.US) key takeaways:
1) AI biz. (i) AI revenue grew 106% YoY in Q1, and Q2 guidance implies an acceleration to +140% YoY to $10.7bn. (ii) AI Networking grew 60% YoY in Q1, accounting for one-third of AI revenue; Q2 is expected to jump to 40% of AI revenue (approx. $4.2bn). Longer term, AI networking components should be ~33%–40% of AI revenue.
(iii) 2027 AI outlook: chip-only AI revenue (incl. ASICs and switch silicon) to exceed $100bn in 2027.
2) AI customer pipeline: now six deep, strategic customers, with OpenAI added as the sixth, moving from a framework agreement to substantive customer status. The portfolio has broadened and deepened. Engagements span multi-year ASIC/XPU programs.
(i) Google: growth to continue in 2026, with strong demand for 7th-gen Ironwood TPUs. Demand for the next-gen TPU is expected to be even stronger in 2027 and beyond. This underscores sustained momentum.
(ii) Anthropic: the 1 GW TPU compute deployment in 2026 is off to a solid start. Demand is expected to surge to >3 GW in 2027. Scale-up plans are intact.
(iii) Meta: the custom accelerator MTIA roadmap is alive and well, with shipments underway. The next-gen XPU is expected to scale to multi-GW in 2027 and beyond. Ramp remains on track.
(iv) 4th and 5th customers: strong shipments in 2026, with shipments expected to more than double in 2027. Visibility is improving. Growth vectors are diversified.
(v) OpenAI (6th): expected to deploy its first-gen XPU at scale in 2027, exceeding 1 GW of compute. This adds a new growth leg. Program milestones are defined.
3) On in-house chips: any hyperscaler or LLM developer aiming to go fully in-house faces major hurdles, requiring top-tier chip design teams, cutting-edge SERDES, advanced packaging, and clustered networking capabilities. Competition from self-build will not materialize for many years. It will eventually come, but the road is long.
4) On prefill vs. decode trends: XPUs will become the mainstream choice, offering flexibility to design for specific workloads. Some designs will tilt to prefill; others to RLHF post-training or test-time scaling. This enables workload-optimized architectures.
5) GPM: not pressured by higher AI shipments. Yields and costs are at levels where AI economics align with the broader semi biz. Margins should be consistent.
6) Market noise: chatter about alternatives exists, but customers plan long term, and AVGO is embedded in their strategic roadmaps. XPU is a strategic, durable franchise with the six customers. Partnership depth mitigates rotation risk.
Overall, results were solid, with AI now in an acceleration phase. Prior concerns centered on in-house efforts or alternatives and potential GPM pressure. Mgmt addressed these: limited margin impact, durable partnerships, and 'alternatives' seen as noise. Execution remains the focus.
Mgmt also gave a clear 2027 AI outlook, boosting near-term confidence. That said, Google’s TPUv8 program also runs a dual track with MediaTek, and despite confidence in the partnerships, large customers probing alternatives can still weigh on sentiment and the multiple. This bear point will persist.
I. AVGO earnings: core metrics recap
By segment
Semiconductor Solutions: revenue $12.5bn (65% of total), +52% YoY. GPM approx. 68%. Growth was AI-led.
- AI semis (core engine): revenue $8.4bn, +106% YoY.
- Non-AI semis: revenue $4.1bn, flat YoY.
Infrastructure Software: revenue $6.8bn (35% of total), +1% YoY. GPM 93%. VMware was strong, with TCV bookings >$9.2bn and ARR +19% YoY.
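For readers reconciling the recap, here is a minimal arithmetic sketch using only the figures above; the implied consolidated total is not stated in the recap itself.

```python
# Reconciling the Q1 segment recap (all figures in $bn). The consolidated
# total is implied by the pieces rather than stated above.

ai_semis = 8.4      # AI semiconductor revenue
non_ai_semis = 4.1  # non-AI semiconductor revenue
software = 6.8      # Infrastructure Software revenue

semis = ai_semis + non_ai_semis  # 12.5, matching the segment line
total = semis + software         # ~19.3 implied consolidated total
print(f"Semis: ${semis:.1f}bn ({semis / total:.0%} of total)")          # ~65%
print(f"Software: ${software:.1f}bn ({software / total:.0%} of total)") # ~35%
```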
II. AVGO earnings call details
2.1 Mgmt highlights
AI biz
Momentum and acceleration: AI is the main growth driver, and it is accelerating. AI revenue rose 106% YoY in Q1, beating estimates, and Q2 guides to +140% YoY. Visibility has improved.
AI Networking outperformance: Q1 AI Networking revenue rose 60% YoY, accounting for one-third of AI revenue. Q2 is expected to jump to 40%, indicating share gains in this arena. Execution is strong.
Tech leadership: in scale-out, the first-to-market Tomahawk 6 switch (100 Tbps) and 200G SERDES are winning over hyperscalers. The next-gen Tomahawk will double performance. Roadmap remains ahead.
Connectivity edge: in scale-up, 200G SERDES enables customers to keep lower-cost, lower-power DACs (direct-attach copper cables), avoiding an early move to optics. This advantage extends into a 400G SERDES upgrade through 2028. Capex and power benefits are material.
Custom accelerators (XPU / ASIC)
Customers and scale: there are six deep, strategic customers. The five existing programs are ramping well, and OpenAI was added as the 6th. Multi-year engagements are in flight.
Customer progress:
- Google: growth continues in 2026, with strong demand for 7th-gen Ironwood TPU. Next-gen TPU demand looks even stronger in 2027+.
- Anthropic: 1 GW TPU compute deployment in 2026 is off to a good start; 2027 demand expected to surge to >3 GW.
- Meta: MTIA roadmap is intact, with shipments underway; next-gen XPU expected to scale to multi-GW in 2027+.
- 4th and 5th customers: strong shipments in 2026; 2027 expected to more than double.
- OpenAI (6th): first-gen XPU to deploy at scale in 2027, exceeding 1 GW of compute.
Partnerships and supply assurance: engagements are deep, strategic, and multi-year, and AVGO provides end-to-end capabilities from silicon design and process to packaging and networking. Multi-year supply agreements are in place, with wafers, HBM, and substrates fully secured for 2026–2028. Supply durability is ensured.
FY2026 Q2 guide
Consol. revenue: approx. $22.0bn, +47% YoY, with growth accelerating. Demand breadth is widening. Strength is AI-led.
Semi revenue: approx. $14.8bn, +76% YoY. Within that, AI revenue to accelerate sharply to $10.7bn, +~140% YoY. Core drivers remain intact.
Infra software revenue: approx. $7.2bn, +9% YoY. Mix benefits persist. Execution remains steady.
Profitability: consolidated GPM ~77% (flat QoQ); Adj. EBITDA ~68% of revenue. Margin profile remains resilient. Unit economics are stable.
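As a quick cross-check of how the guided pieces fit together, here is a minimal sketch using only the approximate guidance figures above; note the AI Networking dollar amount is implied by the ~40% mix comment, not separately guided.

```python
# Consistency check of the FY2026 Q2 guide (all figures in $bn, approximate).

semis = 14.8           # Semiconductor Solutions guide
software = 7.2         # Infrastructure Software guide
ai = 10.7              # AI revenue guide (within semis)
networking_mix = 0.40  # expected AI Networking share of AI revenue

print(f"Consolidated: ~${semis + software:.1f}bn (guide: ~$22.0bn)")  # 22.0
print(f"AI share of semis: {ai / semis:.0%}")                         # ~72%
# Implied networking dollars, consistent with 'approx. $4.2bn' in the takeaways:
print(f"Implied AI Networking: ~${ai * networking_mix:.1f}bn")        # ~4.3
```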
2027 outlook: given strong momentum, visibility has improved markedly. Programs and supply are aligned. Confidence has increased.
Revenue target: clear line-of-sight for chip-only AI revenue (incl. ASICs and switch silicon) to exceed $100bn in 2027, with the required supply chain secured. Program ramps are synchronized. Customer commitments underpin visibility.
Non-AI semis
- Q1 revenue $4.1bn, flat YoY and in line. Strength in enterprise networking, broadband, and server & storage was offset by seasonal softness in wireless.
- Q2 revenue guided to ~$4.1bn, +4% YoY.
VMware: mgmt emphasized this biz is not being disrupted by AI. Instead, VMware Cloud Foundation (VCF) is seen as the necessary abstraction layer in the DC that unifies CPU, GPU, storage, and networking into a high-performance private cloud for enterprise AI workloads. GenAI and agentic AI should drive more demand for VMware.
2.2 Q&A
Q: On the >$100bn, to confirm, you mean AI chips, and can you clarify ASIC vs. networking and how rack revenue factors in? The biggest concern is that despite AI roughly doubling YoY, hyperscalers need ROI this year or next. How do you view this caution, and how is it embedded in your outlook?
A: In recent months, we’ve seen demand concentrated in a few players, some hyperscalers and some not. The common thread is building LLMs, productizing them, and creating platforms for enterprise, code assist, agentic AI, or consumer subscriptions. These few potential customers, many of them now actual customers, are creating GenAI or agentic AI platforms.
Across these 5–6 customers, training compute demand remains strong. Interestingly, inference demand to productize and monetize LLMs is driving a large amount of compute, which benefits us as these customers build custom accelerators and the clustered networking to connect them. We therefore expect demand to re-accelerate.
On your first part, my 2027 forecast of well above $100bn is essentially chip-based content, whether XPUs, switch silicon, or DSPs. We are speaking to silicon content. That is the scope.
Q: There’s been much talk about COT (customer-owned tooling) for in-house XPU/TPU. AVGO has led ASICs for 30 years, and COT efforts have rarely succeeded, with some current COT projects lagging by 2x on performance, design complexity, packaging, and IP. First, based on your visibility into next year, do you see COT taking any meaningful TPU/XPU share from AVGO? Second, with AVGO’s TPU/XPU 12–18 months ahead on performance, complexity, and IP, how do you extend the lead?
A: I emphasized in my opening that any hyperscaler or LLM developer trying to go fully COT faces huge challenges. Technically, you need world-class silicon capability for the XPU that optimizes LLM inference, spanning top design talent, cutting-edge SERDES, very advanced packaging, and critically, clustered networking. We’ve done this for 20+ years, including in today’s GenAI.
If you are an LLM player, ‘good enough’ won’t cut it; you need the best chips to compete with other LLMs and with Nvidia, which improves every generation. To build a global platform, your chips must be as good as or better than Nvidia and everyone else, which requires a partner with the best tech, IP, and execution. Humbly, we’re far ahead today.
We do not expect COT competition for many years. It will come eventually, but there’s a long way to go, and the race continues. Also unique to us: once you design silicon, you must ramp to high-volume production fast. Designing a lab chip is one thing; producing 100k units at acceptable yields and cost quickly is another, and few can do that.
Q: You highlighted networking differentiation more than before. Near term, what drives AI Networking to 40% of AI revenue? Long term, within the >$100bn AI revenue, does that mix shift? How do you sustain leadership in scale-out and scale-up, and does networking leadership help XPU through co-optimization?
A: First, in networking, with new-gen GPUs and XPUs, we are at 200G SERDES. Tomahawk 6, launched ~9 months ago, is the only 100T switch in the market, and hyperscalers want the best network and biggest bandwidth for clusters, so demand is huge. On top of that, in scale-out, we provide the only 1.6T optical DSPs, and this combination is growing networking even faster than the already strong XPU business.
I expect this to stabilize at some point, but we won’t slow down. In 2027, we plan to launch Tomahawk 7 at 2x performance, likely first and clearly ahead, sustaining momentum. Ultimately, I expect AI networking to be ~33%–40% of total AI revenue in any given quarter.
Q: How do you see prefill and decode decoupling from the GPU ecosystem, and what does it imply for custom chips? Any structural shifts between GPUs and customer XPUs?
A: You’re asking how accelerator architectures evolve as workloads evolve. The one-size-fits-all general GPU goes only so far; for example, GPUs are built for dense matmul, while MoE (mixture-of-experts) layers can be handled in software kernels but less efficiently than on silicon tailored to MoE. Inference follows the same logic.
This is leading to XPU designs more specific to each LLM customer’s workloads, diverging from standard GPU designs. As we’ve said, XPUs will become mainstream for flexibility, enabling workload-optimized designs—some better for prefill, others for RLHF post-training or test-time scaling. You tune the XPU to the LLM workload, and we see this roadmap at all five established customers.
Q: On GPM puts and takes, as you ship rack-scale systems, mix could dilute margins. Any guideposts? Racks seem ~45%–50% GPM, so should we expect ~500bps GPM headwind as racks ship, and is there a floor below which you won’t do more racks?
A: That’s a misconception. Our GPM remains at the levels Kirsten reported. We don’t see margin pressure from higher AI shipments, as yields and costs are at levels where AI resembles our broader semi model. Even vs. last quarter’s commentary, the structural impact looks negligible.
Q: On the ‘well above $100bn’ next year, I estimate 8–9 GW: Anthropic ~3 GW, OpenAI ~1 GW, Meta multi-GW, Google at least 3 GW, plus others. I recall ~$20bn per GW of content. Are my GW estimates for 2027 in the right ballpark, and how should we think about per-GW content at shipment to get to ‘well above’ $100bn?
A: You’re right to think in GW, as that’s how we sell chips. Per-GW dollar content varies by LLM customer and can differ meaningfully. But you’re directionally right that the math gets you to a much larger dollar figure.
For 2027, we expect GW deployments to be close to 10 GW. Mix and content per GW will drive the revenue outcome. That is the framework.
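To make the GW framework concrete, here is a minimal arithmetic sketch; note the ~$20bn/GW figure is the analyst's recollection rather than guidance, and mgmt stresses that per-GW content varies meaningfully by customer.

```python
# Illustrative math behind the 'well above $100bn' 2027 framework.
# The $20bn/GW figure is the analyst's estimate, not guidance.

gw_2027 = 10                   # mgmt: 'close to 10 GW' expected in 2027
analyst_content_per_gw = 20.0  # $bn per GW, the analyst's recollection
chip_revenue_floor = 100.0     # $bn, the stated chip-only floor

# The analyst's math: ~10 GW at ~$20bn/GW lands at ~$200bn, which is why
# mgmt can frame $100bn as a floor ('well above'), with mix driving the outcome.
print(f"Analyst math: ~${gw_2027 * analyst_content_per_gw:.0f}bn")

# Conversely, the floor implies average chip content of only ~$10bn/GW:
print(f"Implied floor content: ~${chip_revenue_floor / gw_2027:.0f}bn/GW")
```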
Q: On visibility into four key components through 2028: how did you achieve that, as you may be the first to secure supply through 2028? And after the step-up in 2027 AI, do you have sufficient visibility, based on supply, to continue significant growth in 2028?
A: Yes. We anticipated the sharp acceleration and moved early to secure T-glass and substrates with strong partners on these critical components. Foresight and the right partners made it possible.
As Hock noted, we build custom silicon for six customers with deep, strategic, multi-year engagements. Because of that customization, customers share 2–3 years of plans, sometimes 4 years. That’s why we secure supply across the chain.
Securing it requires investments with partners, not just for more capacity but for the right technologies and capacity. We therefore must lock supply years ahead. You may be right; we are likely the first to secure supply through 2028 and beyond.
Q: Based on your supply, can you grow in 2028?
A: Yes, we can grow in 2028. Visibility and secured capacity support continued expansion. That is our plan.
Q: For the Anthropic program at ~$20bn per GW this year, what’s the chip vs. rack split? When you say $100bn is chips, how do you delineate chip vs. rack programs? Also, AI is shifting from a single large exclusive customer to multiple customers using multiple vendors; how do you gain share visibility and confidence in a more distributed base?
A: As Charlie noted, only a handful of customers—specifically six—drive our revenue. You need to understand each customer’s spend and how critical these programs are, which is why I emphasize custom accelerators. These are strategic, not optional.
They won’t stop; each customer knows how to position custom chips along the LLM trajectory and how to develop inference for productization. We have very clear visibility there. By contrast, GPU-based activity is transactional and optional.
While it may look chaotic from the outside, it isn’t for us. These customers are strategic and goal-oriented, with clear capacity plans each year. Their only question is whether we can move faster, as the roadmap is set.
Q: For Anthropic specifically, how do you split racks vs. chips?
A: I’d prefer not to break that out. As Kirsten said, our revenue and margin profile is in good shape. That remains the case.
Q: You mentioned customers will use DAC through 400G SERDES. Why call that out, especially as a CPO (co-packaged optics) pioneer?
A: I’m highlighting that our networking tech is uniquely positioned to help customers—even those using general GPUs, not just XPUs. If you’re building LLM DCs and we’re helping architect them, you want larger domains/clusters with the most direct XPU-to-XPU connections. DAC offers the lowest latency, power, and cost.
So you want to stay on copper as long as possible, especially for scale-up within the rack domain. For scale-out, optics are fine. With our tech, especially XPU/GPU-to-GPU connections, we can do this on copper at 100G, 200G, and even 400G.
We already have 400G SERDES that can drive in-rack copper distances. Even as a CPO leader, you don’t need to chase CPO now; it will come in due time—not this year, maybe not next, but when it’s ready. We’ll lead there as well.
Q: As you add more customers, many co-designing ASICs with you may use scale-up Ethernet. Can you discuss scale-up protocols and how you see Ethernet evolving there?
A: Ethernet has become the de facto standard for cloud over the past two decades. Two years ago, there was debate on which protocol to use for scale-out back-end networks to achieve the needed latency and scale; it wasn’t yet clear industry-wide, but we knew the answer. Through deep partnerships, it became clear industry-wide that Ethernet is the preferred scale-out protocol for both GPUs and XPUs.
Now scale-up faces the same question scale-out did 3–4 years ago: what’s the right answer? We consistently hear and see Ethernet as the right choice, and last year we announced with several hyperscalers and semi peers that Ethernet scale-up is correct. We believe it will happen, and many XPU designs we’re building require Ethernet for scale-up, which we’re glad to enable.
Q: Progress on fully custom XPUs beyond TPUs. Looking to next year, are these primarily inference-focused, and can you qualitatively compare performance or cost vs. GPUs to justify the scale customers are forecasting?
A: Most customers start with inference because it’s the easiest on-ramp. When you can do the job with a more efficient custom inference XPU at lower cost and power, you don’t need a general large dense-matmul GPU. That’s how customers tend to begin.
Now they are also doing training, and many XPUs serve both training and inference, which are interchangeable. We also see more mature customers developing two chips per year—one specialized for training and one for inference. You level up LLM intelligence via training and then productize via inference.
Productizing inference takes at least a year, during which others may build better LLMs. As you train the next-gen, you must invest in inference—chips and capacity—in parallel. As these six customers mature in that pursuit, our visibility improves.
Q: Over the last 1–2 quarters, what changed in your visibility to give more detail? On specific customers, you mentioned OpenAI at >1 GW in 2027, while reports suggest 10 GW by 2029, implying a steep ramp in 2028. Is that the right way to think about it, and was that always the plan?
A: Yes. In this GenAI ‘process’, a competition among a few players, each is trying to build better, more targeted LLMs. That requires ongoing training to improve models and inference to productize and monetize.
We’ve worked with some for over two years. As they gain confidence that our XPUs hit their objectives—on software and algorithm needs—they recognize these are the chips they want, which improves our visibility. It helps that we only need to work with six customers, all of whom view XPU and AI strategically across multiple generations and years.
Despite market noise on options, they think long term about deploying what we co-develop, to build better LLMs and, importantly, to monetize. We are part of their strategic roadmap. This is not a transactional choice of GPU or cloud; it’s a long-term investment.
Being on their long-term—not transactional—roadmap is key. As noted earlier, there’s a lot of short-term trading noise in the market, whereas our biz and products are positioned for long-term strategy. In short, the XPU franchise is strategic and sustainable across our six customers today.
Risk disclosure and statement: Dolphin Research Disclaimer and General Disclosure
