Dolphin Research
2026.02.04 04:29

AMD (Transcript): LT outlook unchanged; MI500 slated for 2027.

Below is Dolphin Research's transcript of AMD's Q4 2025 earnings call. For the earnings take, see 'All Thunder, Little Rain. When Will AMD Get Its Mojo Back?'

I. AMD call key takeaways:

1) Next-quarter guide: Revenue of 9.8bn (±0.3bn), including approx. 0.1bn from MI308 sales to China. Non-GAAP GPM around 55%.

DC to grow QoQ. Other segments to decline on seasonality.

2) DC guide: Server CPU will buck typical seasonality and grow QoQ; AI GPU (incl. MI308) to grow QoQ. The cost of the 0.1bn in MI308 shipments has already been absorbed in this quarter's COGS.

3) 2026 outlook: Expect meaningful growth in revenue and profit. DC to be driven mainly by MI455 ramp (mostly 'rack-scale' deliveries) with a sharp step-up from 2H26 into 2027.

4) Long-term outlook: Target 60%+ CAGR in DC revenue over the next 3–5 years (in line with the Nov-2025 view), aiming for AI revenue to reach tens of billions by 2027.

5) CPU markets: The overall server CPU TAM should grow at a strong double-digit rate in 2026. The PC TAM may edge down, and management expects H2 to run slightly below H1 on seasonality.

For AMD's CPU biz., server CPUs should grow for the full year. Even if the PC market softens, its PC biz. should still grow.

6) AI GPU progress: MI455 to ramp in 2H26; MI500 slated for 2027. Management does not expect supply to constrain the plan, prioritizing DC GPU and CPU shipments.

Big picture: the sticking point is slower AI GPU growth. Ex-MI308, we estimate AMD's other AI GPU revenue rose only ~0.15bn QoQ this quarter and ~0.1bn next quarter, suggesting the MI355 mass-production ramp is underwhelming.

While investors focus on rack-scale MI455 in H2, MI355 shipments could reignite doubts on product strength. Even as hyperscalers like Meta lift capex, AMD did not raise its long-term growth view (unchanged vs. Nov-2025). Beyond flagging MI500 progress, management conveyed limited additional confidence on the call.

II. $AMD(AMD.US) print highlights

Guide:

  • 26Q1: Revenue of about 9.8bn (±0.3bn), including approx. 0.1bn MI308 sales to China. This frames the starting point for the year.
  • YoY growth of ~32%, driven by strength in DC, Client and Gaming, and modest growth in Embedded. Mix benefits should support the topline.
  • Down ~5% QoQ, reflecting seasonality in Client, Gaming and Embedded, partly offset by DC growth. This is consistent with typical 1Q patterns.
  • Non-GAAP GPM ~55%. Mix and scale are key drivers.
  • 2026 outlook: Expect meaningful rev. and profit growth, led by greater EPYC and Instinct adoption and share gains, plus Embedded returning to growth. Execution on ramps is the swing factor.
  • Financial targets: Reiterate the path to the Nov-2025 LT goals, including a 35%+ CAGR over the next 3–5 years and EPS north of 20 per year over the strategic cycle.
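The guide's moving parts can be sanity-checked with quick arithmetic. The sketch below (a rough consistency check we ran, not company-provided math) uses the Q4 segment figures reported later in the call to confirm that the ~9.8bn midpoint lines up with the stated ~5% QoQ decline and implies the year-ago base behind the ~32% YoY figure:

```python
# Cross-check of the Q1'26 revenue guide against Q4'25 segment figures
# from the call (DC 5.4bn, Client 3.1bn, Gaming 0.843bn, Embedded 0.950bn).
# All figures in $bn; illustrative only.

q4_segments = {"Data Center": 5.4, "Client": 3.1,
               "Gaming": 0.843, "Embedded": 0.950}
q4_total = sum(q4_segments.values())      # ~10.29bn total for Q4'25

guide_mid = 9.8                           # Q1'26 guide midpoint
qoq_change = guide_mid / q4_total - 1     # should be roughly -5% QoQ
implied_yr_ago = guide_mid / 1.32         # base implied by ~32% YoY growth

print(f"Q4'25 total revenue: {q4_total:.2f}bn")
print(f"Guide QoQ change: {qoq_change:+.1%}")
print(f"Implied Q1'25 revenue: {implied_yr_ago:.2f}bn")
```

The segment sum of ~10.29bn against a 9.8bn midpoint works out to roughly a 4.8% sequential decline, consistent with the "down ~5% QoQ" framing.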

III. AMD call details

3.1 Management highlights

Data Center (CPU + AI GPU):

  • Q4 performance: Record 5.4bn revenue, up 39% YoY. Instinct GPUs and EPYC server CPUs were the primary drivers.
  • Server CPU (EPYC): 5th-gen EPYC adoption rose sharply, contributing over half of server revenue (and 50%+ of DC). 4th-gen EPYC remained solid.
  • Cloud and enterprise both hit records, with full-year share highs. Gains were broad-based across customers.
  • Public cloud instances on EPYC grew 50%+ YoY to nearly 1,600 in 2025. Enterprise on-prem EPYC deployments doubled.
  • Demand pipeline remains strong. The next-gen 'Venice' CPU has robust interest and is slated for later this year.

DC AI GPU (Instinct):

Q4 performance: Instinct GPU revenue hit a record on higher MI350 shipments. It also included MI308 sales to China.

Customer traction: 8 of the global top 10 AI companies are adopting Instinct. MI350 is deepening existing partnerships and adding new ones, as hyperscalers broaden supply.

Software ecosystem (ROCm): The stack keeps expanding, with millions of models available out of the box and added support for vertical models such as healthcare. An enterprise AI suite was launched to simplify deployment.

Future roadmap and outlook:

MI400 and Helios: Customer programs continue to scale. Beyond a multi-gen OpenAI collaboration to deploy 6 GW of Instinct GPUs, AMD is discussing large, multi-year rollouts with others. Helios + MI450 deployments start in 2H26.

Portfolio expansion: MI400 spans workloads, including MI455X with Helios for AI superclusters, MI430X for HPC and sovereign AI, and MI440X servers for enterprise. Multiple OEMs plan to launch Helios systems in 2026.

Next-gen: MI500 on CDNA6 with HBM4E is progressing and targeted for 2027, with a major AI performance jump. Execution here underpins the LT cadence.

LT goal: Confidence in sustaining 60%+ DC revenue growth over 3–5 years and scaling AI revenue to tens of billions by 2027. Mix should lift margins over time.

Client (PC CPU):

  • Record 3.1bn revenue, up 34% YoY. Desktop unit shipments set records for the 4th straight quarter.
  • Ryzen demand was strong through the holidays, topping global bestseller lists. Commercial PC adoption accelerated, with units up 40%+ YoY.
  • Launched Ryzen AI 400 mobile at CES. Laptops are already shipping, with a broad AMD AI PC lineup throughout the year.

Gaming:

Revenue of 843mn, up 50% YoY. Gaming GPU revenue grew on holiday demand for the Radeon RX 9000 series.

Outlook: Semi-custom SoC revenue to post a significant double-digit decline in 2026. Next-gen Xbox (AMD SoC) is on track for 2027.

Embedded:

Q4 performance: Revenue of 950mn, up 3% YoY, returning to YoY growth. Orders stabilized across key verticals.

Growth indicators: 2025 design-win value hit a record 17bn, up nearly 20% YoY. Cumulative design-win backlog surpassed 50bn.

Product progress: Rolled out new embedded CPUs and 2nd-gen Versal AI Edge SoCs. The roadmap broadens the addressable market.

3.2 Q&A

Q: On the 2027 AI revenue outlook, and H2 demand for MI455 and the Helios platform, how are customer engagements progressing?

A: MI450 development is on track, with launches and production starting in H2. Customer programs are progressing well, and the OpenAI partnership is solid, with capacity ramps running from H2 through 2027.

We are also working closely with many other customers, who are eager to scale MI450 given its advantages. We see opportunities in both inference and training. As a result, we feel good about 2026 DC growth and are confident in reaching tens of billions of DC AI revenue in 2027.

Q: More color on the Mar-quarter guide and DC GPU growth for the year?

A: We guide one quarter at a time, but for Q1 specifically, while total revenue is down ~5% QoQ, DC actually grows. Server CPU, which normally declines by a high single digit on seasonality, is guided to grow QoQ.

DC GPU, including China, also grows QoQ. So the DC outlook is solid, while Client, Embedded and Gaming show seasonal declines.

For the year, we are very constructive. Two vectors drive DC: server CPU growth is very strong, with AI making CPUs even more critical and orders strengthening over recent quarters, especially the last 60 days.

Server CPU grows from Q4 to Q1 (normally a seasonal down quarter) and continues to grow through the year. On DC AI, 2025 was important, and 2026 is the real inflection: MI355 is performing well into 1H, but MI450 is the true inflection, with revenue starting in Q3 and stepping up sharply in Q4 into 2027.

Q: After Q1, what is the outlook for China MI308 sales, and can DC revenue grow 60%+ in 2026?

A: For China, we were pleased with some MI308 sales in Q4, tied to early-2025 orders and licenses. We expect about 0.1bn in Q1.

Given the dynamic environment, we are not forecasting more China revenue. We have filed for MI325 licenses and remain engaged with customers, but beyond the 0.1bn in Q1, we think it is prudent not to guide further.

On DC overall, we are very optimistic. EPYC (Turin and Genoa) continues to grow well, Venice later in the year should extend leadership, and MI450 ramps materially in 2H26.

We are not providing segment-level guides, but the 60%+ LT growth target is indeed achievable in 2026.

Q: On server CPU capacity, how quickly can you secure more from foundry partners like TSMC, and what does that imply for 2026 growth? Any pricing inflection?

A: We see the overall server CPU market growing at a strong double-digit rate in 2026. We have been increasing our server CPU supply over recent quarters in anticipation of this, which also supports our stronger Q1 server outlook.

We see the ability to grow through the year. Demand is clearly strong, and we are working with the supply chain to add capacity, so we are scaling supply to meet the opportunity.

Q: How should we think about full-year GPM given stronger server CPU and an accelerating GPU ramp in H2?

A: We were pleased with Q4 GPM, and Q1 is guided to 55%, up 130bps YoY, with MI355 scaling meaningfully YoY. We benefit from a favorable mix across all businesses.

In DC, we are scaling next-gen products, including MI355, which helps margins. In Client, we keep moving upmarket and are gaining in commercial, which continues to improve margins.

Embedded is recovering as well, contributing to GPM. These tailwinds should persist over the next few quarters, and as MI450 ramps into Q4, GPM will be mix-driven. We will provide more detail later, but we are very comfortable with margin progress in 2026.

Q: For MI455 ramp, will 100% be rack-scale? Will there be 8-GPU servers, and when is revenue recognized?

A: We do have multiple MI450 variants, including 8-GPU form factors. But for 2026, the vast majority will be rack-scale.

Yes, we recognize revenue upon shipment to the rack integrator. That aligns with our delivery model.

Q: Any risks in turning chips into racks, and are you pre-building racks to derisk the ramp?

A: Development is progressing well, with MI450 and Helios both on plan. We have done extensive testing at both silicon and rack levels, with strong customer feedback enabling parallel validation.

Everything remains on schedule for H2 releases. We remain confident in execution.

Q: Opex trajectory as GPU revenue scales. Do we get operating leverage, or does opex rise faster with AI?

A: We are confident in the roadmap, and in 2025 we increased opex alongside revenue, which was the right call. Into 2026, with significant growth expected, our LT model assumes opex grows slower than revenue.

We expect that to hold in 2026, especially as we see the H2 revenue inflection. Given FCF generation and growth, investing in opex remains the right move.

Q: Is the 0.1bn from China in Q1 also zero-cost like in Q4, and any GPM impact? Also, any specifics on 2025 Instinct scale?

A: For the 0.1bn in Q1, the 360mn inventory charge taken in Q4 covered not only the cost of Q4 China shipments but also the MI308 shipments expected to generate the 0.1bn in Q1. So the Q1 GPM guide is clean.

On Instinct scale, we do not guide at the sub-segment level. But for modeling, even excluding China one-offs, DC AI grew from Q3 to Q4, which should help frame your estimates.

Q: Client was strong in Q4, but with DRAM cost inflation, any order pattern changes? How do you view 2026 Client growth and health?

A: 2025 was excellent for Client, with strong ASP uplift toward premium and unit growth. Into 2026, we are monitoring closely, and given commodity inflation including memory, the PC TAM may edge down, with H2 slightly below H1 on seasonality.

Even in a softer PC market, we believe our PC biz. can grow. The focus is enterprise, where we made solid progress in 2025 and expect continued premium mix gains in 2026.

Q: Competitors are working with SRAM-based architectures; what does that mean for HBM-based Instinct in inference, and how do you address low-latency inference?

A: This is a natural evolution of a maturing AI market. As inference scales, cost efficiency per token and per inference matters more, and our chiplet architecture allows deep optimization across training and inference phases.

We will see more workload-optimized solutions, via GPUs or ASIC-like designs. We have a full compute stack to address these needs, and we view inference as a major opportunity alongside training.

Q: On OpenAI, will the 6 GW, 3.5-year plan start on schedule in H2, and any more color on the partnership?

A: We are working closely with OpenAI and CSP partners to deliver MI450 and execute the capacity ramp starting in H2. MI450 is on track, Helios is progressing well, and we are co-developing deeply with partners.

We are optimistic about scaling MI450 with OpenAI. However, note that we have many other customers highly interested in MI450, and we are ramping with them in the same timeframe.

Q: x86 vs. ARM in server CPUs, especially for agents. Does x86 have an edge, and thoughts on Nvidia's ARM CPU?

A: Demand for high-performance CPUs is robust, especially for agent workloads that spawn a substantial volume of traditional CPU tasks, the vast majority of which run on x86 today. EPYC is workload-optimized, with best-in-class cloud and enterprise SKUs and cost-optimized options for storage and other uses.

All of these matter in building end-to-end AI infrastructure. CPUs remain critical as AI infrastructure scales, and as we said at the Nov Analyst Day, this is a multi-year CPU cycle, and we continue to see that play out. EPYC is optimized across these workloads, and we will keep expanding share with customers.

Q: How far ahead do you procure HBM and other memory, a year or six months?

A: Given lead times for HBM, wafers and other components, we work with suppliers on multi-year horizons, covering demand forecasts, capacity adds and co-development. We are comfortable with supply-chain readiness.

We have planned these ramps for years, across both CPUs and GPUs, and are well prepared for substantial growth in 2026. Given tight supply, we have also entered multi-year agreements beyond that window.

Q: With changes like system accelerators, KV-cache offload and more discrete ASIC-style compute, will AMD follow these architectural shifts, and how do you view your system architecture evolution?

A: With a very flexible chiplet and platform architecture, we can deliver different system solutions for different needs. We recognize there will be diverse solutions rather than one-size-fits-all.

Rack-scale is excellent for the highest-end use cases like distributed inference and training. We also see opportunities in enterprise AI for other form factors, and we are investing across that spectrum.

Q: From MI300 to MI400 to MI500, how do margins evolve: up, down or volatile?

A: At a high level, each generation delivers more capability and memory, driving more customer value. Generally, margins should improve with each generation.

At the start of a new generation, margins tend to be lower. As scale grows, yields and testing improve, and performance matures, margins improve within the generation, so over time you should expect higher margins gen over gen.

Q: How much will Gaming decline in 2026, and what is the annual trajectory?

A: 2026 is year seven of the current console cycle, when revenue typically declines. We do expect a significant double-digit drop in semi-custom revenue in 2026.

As next-gen products like Xbox ramp, we expect that downtrend to reverse. The timing aligns with the next cycle.

Q: Any supply constraints on rack systems in H2 that could limit revenue growth, especially from Q3 to Q4?

A: We have planned at the component level across the stack. We do not expect supply to constrain the DC AI ramp under the current plan, which is aggressive but achievable.

Given AMD's scale, our top priority is to ensure DC capacity ramps smoothly across GPUs and CPUs. That is where execution is focused.

Q: Biggest investment areas in 2025, and where will incremental opex go in 2026?

A: 2025 investments are centered on DC AI, including accelerator hardware roadmaps, expanding software, and the ZT Systems acquisition to enhance system-level solutions. We are also investing in go-to-market to support growth and scale commercial and enterprise for CPUs.

In 2026, we expect to keep investing actively, but revenue should outgrow opex, driving EPS higher. That operating leverage remains a key target.

Risk disclosure and disclaimer: Dolphin Research Disclaimer and General Disclosure