
AVGO (Transcript): Anthropic adds another $11bn; $73bn order book is the floor
Below is Dolphin Research's transcript of Broadcom's FY2025 Q4 earnings call. For our First Take on the print, see 'Broadcom: Challenging NVIDIA, but the vanguard blinked first?'.

Broadcom shares fell after hours as management struck a cautious tone in the discussion, particularly on the AI order outlook and new-customer progress. Key points:
1) AI orders: Total AI backlog across AI components and XPUs has topped $73 bn, to be delivered over the next 18 months (six quarters). Management clarified that $73 bn reflects orders on hand, i.e., a floor, and more orders are expected during the period.
2) Backlog mix: Of the $73 bn to be delivered over the next 18 months, roughly $20 bn is for non-XPU components, with the remainder for XPUs (approx. 72.6%).
3) Customer progress: ① Anthropic placed a $10 bn order last quarter and added another $11 bn this quarter, to be delivered by end-2026. ② A fifth XPU customer, with whom Broadcom has worked for years, placed a $1 bn order (delivery by end-2026).
4) FY2026 outlook: Infrastructure software revenue is expected to grow at a low-teens pace, while non-AI semiconductors remain stable.
5) OpenAI collaboration: The 10GW program skews to 2027–2029 rather than 2026. It represents a directional agreement on the future path and will not contribute much in 2026.
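As a quick cross-check on the figures above (illustrative only: the $73 bn and $20 bn are management's quoted numbers, while the XPU dollar amount and the ~72.6% share are derived):

```python
# Sanity check on the quoted AI backlog mix (all figures in $bn).
# $73bn total AI backlog and $20bn non-XPU are management's numbers;
# the XPU portion and its share are derived here.
total_ai_backlog = 73.0
non_xpu = 20.0  # switches, DSPs, lasers and other AI components

xpu = total_ai_backlog - non_xpu
xpu_share = xpu / total_ai_backlog
print(f"XPU portion: ${xpu:.0f}bn, {xpu_share:.1%} of AI backlog")
```

This reproduces the approx. 72.6% XPU share quoted in the backlog-mix point.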
Overall, investors had been very bullish on Broadcom's AI trajectory into next year and beyond, helped by the cost-performance appeal of Google's Gemini and TPUs. The six-quarter, $73 bn order guide, however, came as a splash of cold water: excluding Anthropic and the fifth customer, it implies Google's own needs are largely flat, which disappointed the market.
Management stressed that $73 bn is a floor and actual results should be better, but the number still pressured sentiment, and the stock fell ~5% after hours. While this dents near-term confidence, Dolphin Research views Broadcom's tone as typically conservative and expects potential upward revisions to AI guidance. Medium to long term, the 'Google Gemini + Broadcom' combo retains a clear cost-performance edge and should keep pressure on NVIDIA.
I. Broadcom core financials review
FY2025 results: Consolidated revenue reached a record $64 bn, up 24% YoY. AI revenue was $20 bn (+65% YoY), driving record semiconductor revenue of $37 bn; infrastructure software revenue was $27 bn (+26% YoY).
Adj. EBITDA was $43 bn, or 67% of revenue. FCF was $26.9 bn, up 39% YoY.
Q4 FY2025:
Total revenue hit a record $18 bn, up 28% YoY and beating estimates. Adj. EBITDA was $12.2 bn, up 34% YoY, with a 68% margin.
Semiconductor solutions revenue was $11.1 bn (+35% YoY). Infrastructure software revenue was $6.9 bn (+19% YoY).
Q1 FY2026 guide:
Consolidated revenue of approx. $19.1 bn (+28% YoY). Adj. EBITDA margin of approx. 67%.
II. Earnings call details
2.1 Management highlights
1. AI semis: Broadcom has now delivered 11 straight quarters of AI revenue growth.
- XPU: Demand is driven by LLM training use cases; XPU revenue more than doubled YoY as customers accelerate deployments.
- On Google: the TPUs used to build Gemini are also used by Apple, Cohere and SSI for AI cloud compute, underscoring the scale of the collaboration.
Order momentum:
- In FY2025 Q3, secured a $10 bn order for the latest TPU 'Ironwood' from Anthropic.
- This quarter, received an additional $11 bn order from the same customer, to be delivered by end-2026.
- Also won a fifth XPU customer via a $1 bn order, with deliveries by end-2026.
- Outlook: Expect AI spend to keep accelerating into FY2026. Guide AI semi revenue of $8.2 bn in Q1 FY2026, up ~100% YoY.
2. AI networking: strong demand, sizable backlog
- AI switch backlog has exceeded $10 bn.
- The 102.4 Tbps Tomahawk 6, the world's first and only switch at this bandwidth, is booking at a record pace.
- Booked record orders in optical components like DSPs and lasers, and in PCIe switches, with all products headed to AI data centers.
3. Overall AI scale and deliveries
- Combined AI components and XPUs have driven total AI backlog past $73 bn, nearly half of Broadcom's total consolidated backlog ($162 bn).
- Management expects the $73 bn to be delivered over the next 18 months.
4. Non-AI semis: Q4 benefited mainly from seasonal strength in wireless. Expect Q1 FY2026 non-AI semi revenue of about $4.1 bn, flat YoY, with sequential decline on wireless seasonality.
- Expect non-AI semi revenue to remain stable through FY2026.
5. Infrastructure software
- Growth driven by strong adoption of VMware Cloud Foundation (VCF).
- Q4 bookings remained robust with total TCV over $10.4 bn. Full-year infra software backlog reached $73 bn (vs. $49 bn a year ago).
- Guide Q1 FY2026 infra software revenue of about $6.8 bn, with renewal seasonality.
- Expect FY2026 infra software revenue to grow low-teens, still led by VMware.
6. Financials
- Operating efficiency: Q4 infra software GPM rose to 93% with OPM at 78%, reflecting completion of VMware integration.
- Raised Q1 FY2026 common dividend to $0.65 per share (+10% QoQ). Plan to maintain this quarterly payout through FY2026, implying a record $2.60 per share for the year (+10% YoY), marking the 15th straight year of annual dividend hikes.
- Buyback authorization extended, with $7.5 bn remaining through end of calendar 2026.
- Tax rate: expect non-GAAP tax rate to rise from 14% to ~16.5% in FY2026 due to global minimum tax and geographic mix.
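The dividend figures above hang together arithmetically; a minimal check (the ~$0.59 prior quarterly payout is derived from the stated +10% hike, not quoted in the call):

```python
# Check the dividend math: $0.65/quarter held for four quarters gives the
# stated record $2.60/share for FY2026; a +10% hike implies a prior
# payout of ~$0.59/quarter (derived, not quoted).
quarterly = 0.65
annual_fy2026 = quarterly * 4
implied_prior_quarterly = quarterly / 1.10
print(f"FY2026 annual dividend: ${annual_fy2026:.2f}/share")
print(f"Implied prior quarterly dividend: ~${implied_prior_quarterly:.2f}")
```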
2.2 Q&A
Q: With $73 bn of AI backlog to be delivered over 18 months, does that imply AI revenue of $50+ bn in FY2026? Also, how do you view the trend of customers insourcing ASICs and the impact on your XPU share at hyperscalers?
A: The $73 bn AI backlog spans XPUs, switches, DSPs and lasers, and is slated for delivery across the next 18 months. This reflects orders on hand only, and we expect more orders to be added. Ordering strength is broad-based across AI data center components, not just XPUs, with an unprecedented surge over the past three months, especially for Tomahawk 6, our fastest-deploying switch ever.
On XPU roadmaps and 'build-your-own' narratives, don't believe the hype: the notion that customers will broadly self-develop their own accelerators is overstated and practically unlikely, since building a custom AI accelerator takes years. That said, many LLM players have strong reasons to pursue custom silicon, as custom XPU designs materially outperform general-purpose GPUs; we see clear gains in power, training and inference across TPUs and other accelerators.
Q: As TPUs expand to more external customers, does that substitute for potential ASIC customers working with you, or does it enlarge the market? Any financial implications from your vantage point?
A: Most TPU users are adopting them as an alternative to GPUs, which is the most common case. Shifting from GPU to TPU is a transactional decision, while building a bespoke AI accelerator is a multi-year strategic decision; nothing stops those customers from continuing to invest toward the goal of creating and deploying their own custom accelerators.
Q: The $73 bn AI backlog to be delivered over six quarters reflects current orders. Given your lead times and the prospect of new orders, should we expect that deliverable figure to rise? Also, do you have sufficient foundry and materials commitments for 3nm/2nm wafers, packaging, substrates and HBM? What advanced packaging steps are addressed by the Singapore site?
A: The $73 bn is the on-hand backlog to be delivered over the next six quarters. Given product lead times of roughly six to 12 months, we expect additional orders to be slotted into those six quarters. Practically, view it as at least $73 bn of revenue over the next six quarters, with upside as new orders land.
On supply chain, particularly silicon and packaging, these remain critical challenges we actively manage. The Singapore facility is intended to internalize portions of advanced packaging given the scale, to enhance supply security and delivery reliability, not just cost. For wafers, we rely primarily on TSMC and continue to secure more 2nm/3nm capacity; there are no constraints today, though the future will be determined over time.
Q: On the initial $10 bn order, the follow-on, and the fifth customer, how will deliveries occur? Are you shipping XPUs or full racks? Also, how should we think about the compute and deliverables, and will non-Google customers replicate Google's networking or use yours?
A: This is essentially a system sale. Our AI systems include numerous components beyond the XPU and the customer accelerator, and for hyperscalers it increasingly makes sense to sell the full system and take responsibility for the rack. The market now understands this as a system sale.
For the fourth customer, we are indeed selling systems that include our key components. That is no different than selling chips, and we ensure the end system is operational as part of the sale.
Q: Will AI mix dilute GPM, especially as system sales ramp? Over the next 4–6 quarters, could GPM fall below 70%? And what about OPM given operating leverage?
A: The AI impact is not yet fully visible in reported financials, even though some system sales have begun. AI revenue carries a lower GPM than the rest of the portfolio, including software, but its growth will be very rapid, creating operating leverage that drives OP dollars higher.
So while headline GPM will start to decline, operating leverage should benefit us at the OPM level. Specifically, as we deliver more systems in the back half and pass through more third-party content, similar to memory on XPUs, GPM will come down. However, GP dollars will rise, and OP dollars will also increase, though margin as a percent of revenue will dip modestly; we will provide more specific guidance closer to year-end.
Q: Can you be more specific on FY2026 AI revenue? You said growth would accelerate from +65% in FY2025, and Q1 guidance is +100% YoY. Should Q1 be a starting point for the full-year growth rate, or somewhat lower? Also, can you confirm whether the $1 bn fifth-customer order is indeed from OpenAI?
A: The backlog is very dynamic and growing. Six months ago we suggested FY2026 AI could grow 60–70% YoY, and now Q1 is set to double. With the current $73 bn to be delivered over 18 months, which we expect to keep rising, FY2026 may turn into an accelerating year as it progresses.
Q: On the OpenAI contract, the program is expected to run through 2029 at 10GW, which I assume is the fifth customer. Do you still expect it to be a growth driver, what could impede it, and when do you expect revenue contribution?
A: It is the fifth customer, it is real, and it will grow. We have been collaborating with them on XPUs for years, and we will leave it at that.
As for the OpenAI item you mentioned, we agree it is a multi-year journey through 2029. As stated in our joint release, the 10GW scale is weighted to 2027–2029 rather than 2026, and is a directional agreement; we do not expect much contribution in 2026.
Q: How do you see Broadcom's custom silicon content growing by generation? With a competitor launching CPX for larger context windows, do the multiple XPU programs at the five customers expand the opportunity set?
A: Each of the five customers can develop parallel XPUs for training and inference, creating many variants and content. Custom accelerators often embed unique hardware functions, such as integrating a power/cost-efficient data router and dense matrix multipliers on a single die; even within a customer, memory capacity and bandwidth can vary by chip to fit inference, decoding and other workloads.
We are effectively building different hardware for different workload aspects. It is a highly diversified space, and each customer is pursuing multiple chips.
Q: First, you guided AI up about $1.7 bn QoQ. Is growth broad-based across the three existing customers, or concentrated in one? Second, a competitor just bought a photonic fabric company; is that disruptive or still early?
A: Growth is occurring and feels like a blend across existing customers and XPU programs. Specifically, we are seeing XPU-led growth alongside very strong demand for switches (both Tomahawk 6 and Tomahawk 5), our latest 1.6Tbps optical DSPs, and hence very strong demand for optical components and lasers. To frame the mix: of the $73 bn AI backlog scheduled over 18 months, roughly $20 bn is for non-XPU components, with the rest XPUs; that $20 bn non-XPU piece is still substantial.
Silicon photonics is a pathway to more efficient, lower-power interconnects for both horizontal and, eventually, vertical scaling. We have the technology and continue to develop from 400G to 800G and now 1.6T silicon photonic switches and interconnects. Engineers are still pushing copper for in-rack vertical and non-pluggable optics for horizontal, and we are prepared, but widescale adoption is not imminent.
Q: Can you discuss supply-chain resilience and visibility with key materials vendors, especially to support current programs and the two new custom compute processors announced this quarter? Given your large role across AI networking and compute and the record backlog, where are the bottlenecks, and how do you see them easing in 2026?
A: We run product technology and operations that manufacture multiple frontier components underpinning today's AI data centers. Our 1.6Tbps DSPs provide leading bandwidth for top-end XPUs and GPUs, complemented by lasers and other active components like EMLs and VCSELs.
Looking at AI racks and systems, the bottlenecks can be clear—and sometimes we are part of that bottleneck, which we are addressing. We feel good about the setup into 2026.
Q: First, to clarify, is the OpenAI agreement a general, potentially non-binding framework, similar to agreements with NVIDIA and AMD? Second, why will non-AI semi revenue be flat—are inventories still elevated, and what is needed for growth to resume?
A: On non-AI semis, broadband is indeed recovering well. Other areas are stable but we have not seen sustained strength, mostly because AI is absorbing spend from enterprises and hyperscalers elsewhere; things are not worsening, but aside from broadband, the recovery will not be quick.
Regarding OpenAI, without delving into specifics, the 10GW announcement speaks to that. Separately, our custom XPU program with them is in an advanced stage and moving quickly, and that broader effort will include committed elements; the 10GW announcement is an agreement to develop 10GW of capability for OpenAI over 2027–2029, distinct from the XPU program we are co-developing.
Q: If you have $21 bn of rack revenue in the back half of 2026, will that rate persist? Will you continue selling racks, and how might the mix shift over time? I am trying to understand what portion of the 18-month backlog is full-system sales today.
A: It depends on how much compute our customers will need beyond the next 18 months. Based on what we know, your guess is as good as ours; if they need more, it could continue and even scale further, and if not, then it will not. Our intent is to describe the demand we see over the current window.
Risk disclosure and statements: Dolphin Research disclaimer and general disclosure
