
ARM (Trans): AGI CPU to begin contributing revenue by the end of next FY; long-term targets unchanged
Below is Dolphin Research’s transcript of the ARM FY26 Q4 earnings call; for the earnings recap, please see 'ARM: Short-term Noise vs. Long-term AI Premium?'.
I. $Arm(ARM.US) Key takeaways from the print
1. This quarter (FQ4 FY26): revenue of $1.49bn (+20% YoY), a record high and above the guide midpoint. License was $819mn (+29% YoY) and royalty was $671mn (+11% YoY). Non-GAAP OPM was ~49%, with non-GAAP OP of $731mn and non-GAAP EPS of $0.60, both record highs. Non-GAAP OpEx was $734mn (+30% YoY, mainly R&D), about $10mn below guidance; ACV rose 22% YoY.
2. Full year (FY26): revenue hit a record $4.92bn (+23% YoY), marking a third straight year of 20%+ growth. Royalty was $2.61bn (+21%) and license was $2.31bn (+25%). Non-GAAP EPS reached $1.77, a record.
3. Next quarter guide (FQ1 FY27): revenue of $1.26bn ± $50mn (midpoint +20% YoY). Both royalty and license are expected to grow ~20% YoY. Non-GAAP OpEx of ~$760mn; non-GAAP EPS of $0.40 ± $0.04.
4. FY27 outlook: royalty growth to hold around ~20% for the year with limited intra-quarter volatility; license to be back-half weighted at roughly 60%/40% (H2/H1). OpEx to rise modestly QoQ, by only a few percent; operating leverage to re-emerge in H2, ending the year with OpEx growth below revenue growth.
5. Long-term targets (FY31): reiterate goals from the Arm Everywhere event — $15bn AGI CPU revenue and $10bn IP revenue, totaling $25bn; EPS to exceed $9. At that point, IP biz operating/EBITDA margin ~65%, and chip biz ~35%.
II. ARM earnings call detail
2.1 Prepared remarks
1. Strategy and positioning
a. AI is shifting from human query-based workloads to always-on agentic workloads. The CPU’s role in the data center is expanding meaningfully as it coordinates tasks, moves data, manages memory, enforces security, and orchestrates around accelerators. b. Management sees agentic AI at scale driving data center CPU demand to 4x+ today’s level, implying a $100bn+ data center CPU TAM by 2030; the CEO expects Arm-based CPUs to become the largest CPU category by share by the end of the decade.
c. Arm is running a three-track strategy: drive royalty growth via IP and CSS, add silicon (AGI CPU systems/chips) as a new growth vector, and extend into next-gen AI workloads through a unified compute platform and software ecosystem.
2. Data center / Cloud AI (core growth engine)
a. Data center royalty has more than doubled YoY for multiple consecutive quarters, with Cloud AI the biggest contributor this quarter. Arm’s share in data center networking silicon (DPU, SmartNIC) is near 100%. b. Arm’s compute share across top hyperscalers is ~50% and rising, supported by Neoverse CSS.
c. Customer updates: at Google Cloud Next, both TPU (training) and TPU 8 (inference) replaced x86 hosts with in-house Axion Arm CPUs, delivering 80% higher performance vs. prior-gen x86 and 50% lower power. AWS continues to scale Graviton with Trainium/Nitro; Microsoft is advancing Arm-based Cobalt; NVIDIA introduced next-gen Arm-based Vera CPU at GTC and showcased a rack with 256 Veras.
d. The CEO expects NVIDIA accelerator platforms, Google TPU platforms, and AWS Trainium platforms to be Arm-led (vast majority). The trend toward all-Arm companion CPUs for the three major accelerator stacks is underway.
3. AGI CPU (new growth vector)
a. Designed for agentic AI, the first data center production system delivers 2x rack-level performance vs. x86, with AI DC capex savings up to $10bn per GW. The first product has 136 cores, and the CEO expects a path toward 256/512 cores. b. Meta is the lead launch partner and co-developer, building a cross-generation roadmap around 'personal super intelligence' for 3bn+ users. Customers can choose IP, CSS, or turnkey silicon, all on one compute platform and a single software stack.
c. Demand far exceeded expectations: vs. the $1bn demand disclosed at the event, FY27+FY28 locked-in demand is now $2bn+, while the company maintains the $15bn long-term goal. Initial production silicon revenue is expected in FY27 Q4; due to wafer/memory/packaging/test constraints, the initial $1bn guide is unchanged.
d. Customer traction: SAP is moving core DB and biz apps to Arm (starting with AWS Graviton, then expanding to AGI CPU). Cloudflare will deploy Arm across its global network for traffic management, security, and proximate AI inference; design wins also include F5 and SK Telecom in network infra. Accelerator vendors including Cerebras, OpenAI, Rebellions, and Positron are using AGI CPU as head nodes.
e. 50+ ecosystem partners publicly support Arm’s entry into silicon, including EDA (Synopsys, Cadence), foundry/manufacturing (Samsung, TSMC), ODMs (Super Micro, Lenovo, ASRock), and hyperscalers (AWS, Microsoft, Google, NVIDIA). All key licensees were briefed pre-event and offered 100% support.
4. Edge AI (smartphones, etc.)
a. Smartphone royalty kept growing despite a soft end market, driven by rising royalty rate as Armv9 and CSS penetrate high-end models. b. Smartphone unit growth turned negative last quarter; the CFO expects the broader mobile market to be flat to slightly negative, with pressure at the low end and limited impact on Arm.
5. Physical AI / Auto / Edge
a. ADAS/autonomous silicon based on Arm continues secular growth; auto revenue maintains double-digit growth with share gains. b. AI workloads will extend to phones, PCs, autos, factories, robots, cameras, sensors, and IoT devices; cumulative Arm-based chip shipments exceed 350bn with 22mn+ developers.
6. License momentum
a. License & other revenue was $819mn (+29% YoY), supported by next-gen architecture demand and deeper strategic engagements with key accounts. b. Signed a long-term strategic partnership with the Indonesian government to bolster the country’s AI development capabilities.
c. Closed two next-gen CSS licenses in the quarter — one for smartphone SoCs and one for data center networking. d. The SoftBank technology license and design services agreement contributed $200mn this quarter, flat QoQ.
2.2 Q&A
Q: In just 6 weeks post-event, AGI CPU demand rose from $1bn to $2bn+. Is the upside mainly existing customers increasing orders or new customers/new use cases? Also, with demand doubling but no guide raise, how is supply ramp progressing with foundry and memory partners?
A (CEO Rene Haas): It is both — named customers from Arm Everywhere increased forecasts, and new customers we did not name expressed strong interest for rapid deployment. A compelling path is buying turnkey racks from Super Micro, Lenovo, or ASRock for fast ordering and deployment. Many new customers already run Arm, either in-house or on Arm-based cloud instances, so software ports are largely done; moving from software-ready to racks in the DC is near frictionless. The $1bn demand we disclosed at end-Mar is fully covered by secured supply — wafer, memory, packaging, and test; the team is now working 24/7 with partners to land incremental supply to support the $2bn demand.
A (CFO Jason Child): From a guide standpoint, at Arm Everywhere we suggested assuming ~$90–100mn of silicon revenue within one or two quarters in FY27 — we maintain that target. As FY27 progresses, we will update on supply milestones; by FY27 Q3 we will provide clearer numbers for Q4 and an initial view for FY28.
Q: How do you break down the drivers of royalty rate growth for FY27 Q1 and the full year? DC is clearly accelerating, but what about consumer electronics/smartphones?
A (CFO): Last quarter, MediaTek’s strong 4nm ramp in the year-ago period did not repeat, so near-term royalty growth moderated; per guide, Q1 should return to ~20%. We assume mobile shipments turned negative in Q4 and should stay flat to slightly negative, with weakness concentrated in the low end and limited impact on Arm; any softness on handsets will be more than offset by Cloud AI/DC demand, resulting in net upside.
AWS and Google announcements on Arm-based deployments show acceleration. All three GPU camps — NVIDIA Vera/Grace, Google TPU with Axion 2, and AWS Trainium with next-gen Graviton — are aligned with Arm, offering sustained upside through the year, with timing subject to customer disclosures. Auto remains double-digit with share gains; the trend is intact. We are confident in the full-year royalty outlook.
Q: AMD cited a $120bn CPU TAM by 2030, above your $100bn, and also targets 50% share, while hyperscalers have Graviton, Axion, Vera, etc. What is Arm’s 'natural position' here, and whose share do you take?
A (CEO): On Mar 24 we were first to frame the TAM at ~$100bn, and it is positive that the market now converges — $120bn is plausible. It is not just more CPUs — core counts per CPU are rising fast since many agents prefer a dedicated core or batch; hence AGI CPU starts at 136 cores, ahead of some peers, and I can easily see 256/512-core designs. At high core counts, per-core efficiency is the real contest — Arm’s sweet spot. On share, AMD says 50%, Intel says 50%, and Arm also ~50% — obviously not all true at once. What I can say is that current hyperscalers — NVIDIA, Amazon, Google — attach their accelerators (TPU, Trainium, from Blackwell to Rubin) to Arm, trending toward near-100% Arm on companion CPUs.
There is also a segment — Cloudflare, Meta, SAP, SK Telecom, OpenAI — that will not design Arm CPUs in-house due to capex or engineering constraints; that is naturally ours. AWS selling Graviton externally is another signal of Arm compute scarcity. We coexist with partners without crowding each other, and the market is big enough for both; I am confident Arm will be the largest CPU category by share by decade-end.
Q: When you say '100% Arm' for accelerator-attached CPUs, do you mean a 100% attach rate? Also, as you go merchant, will OpEx rise for customer support?
A (CEO): To clarify, over time the vast majority of CPUs attached to Trainium, TPU, and NVIDIA accelerators will be Arm. NVIDIA is essentially there; AWS Graviton has advanced materially in recent quarters; and Google announced Axion for both TPU training and inference at Cloud Next — the trend is happening. The reason is better rack-level performance at the same power, with Google citing 80% overall performance uplift. On OpEx and customer support, the Arm-based rack model mirrors prior ODM collaborations — ODMs build the rack, customers own app software, and we support silicon-level code, firmware, boot, etc., while closing hardware issues end-to-end.
This support headcount is already included in our disclosed hiring plans and will not add extra expense.
A (Unknown Executive): Customer support OpEx is embedded in the Arm Everywhere numbers and in our long-term guide — no incremental accruals.
Q: How should we think about quarterly cadence for FY27 royalty and OpEx?
A (CFO): Royalty should run roughly ~20% for the year with only modest quarterly variance; license similarly, but will be back-half weighted at ~60%/40% (H2/H1), consistent with the past three years. On OpEx, we previously expected a larger step-up from Q4 to Q1, which proved smaller than planned; going forward, OpEx should tick up a few percent QoQ, with operating leverage building through the year. By year-end, OpEx growth should be below revenue growth, returning to the 'incremental operating leverage' profile we had before the recent investment ramp.
Q: As inference becomes the main AGI CPU use case, when do CPU/GPU approach 1:1, and how do you size head-node vs. host-node opportunities?
A (CEO): A static view is misleading. By chip counts, CPUs may not exceed GPUs, but by core counts, they almost certainly will. Accelerators like Blackwell/Rubin are at or near reticle limits, making it hard to grow GPU counts much further; AGI CPU is at 136 cores and Vera at 88 cores today, with potential to double or quadruple. Thus, even with similar chip counts, the CPU cores:GPU cores ratio will rise materially.
The upside is less about head nodes (constrained by GPU architecture and CPU-GPU interconnect) and more about dedicated CPU racks — tens to hundreds of CPUs orchestrating agentic workloads. For example, NVIDIA’s Vera Rubin pairing uses a 200kW liquid-cooled rack with 256 Vera CPUs at 88 cores each, placed adjacent to Vera Rubin — you might interleave a Vera-only CPU rack between two Vera Rubin racks, changing the ratio. Our CPU demand view may still be conservative — 4x could be a floor. Ratios are the wrong lens because cores per CPU are exploding, pushing ASPs up and TAM higher.
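The core-count argument above can be sanity-checked with quick arithmetic. A minimal sketch: the 256-CPU, 88-core Vera rack and the 136/256/512-core AGI CPU path are the numbers stated on the call; the multiples computed below are illustrative, not company guidance.

```python
# Back-of-envelope check of the core-count argument, using only
# figures cited on the call (illustrative, not company guidance).

# NVIDIA Vera rack as described: 256 CPUs at 88 cores each.
vera_cpus_per_rack = 256
vera_cores_per_cpu = 88
rack_cpu_cores = vera_cpus_per_rack * vera_cores_per_cpu
print(f"CPU cores per Vera rack: {rack_cpu_cores}")  # 22,528 cores

# Management's point: with chip counts roughly flat, moving from
# 136 cores to 256 or 512 cores per CPU multiplies total CPU cores.
base_cores = 136  # first AGI CPU product
for cores in (136, 256, 512):
    print(f"{cores} cores per CPU -> {cores / base_cores:.2f}x core count")
```

This is why management argues that CPU-to-GPU chip ratios are the wrong lens: total CPU cores, and hence per-chip ASPs, scale even when chip counts do not.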
Q: By entering silicon, you will compete with existing IP customers who also sell Arm-based CPUs. What has been the large-customer response since Mar, and how do you manage potential tension between product and IP?
A (CEO): Critical question. Before moving into silicon, top priority was ecosystem alignment — chipmakers like Samsung and TSMC, EDA partners like Synopsys and Cadence, the Linux/Kubernetes software communities, and our licensees — AWS, Microsoft, Google, NVIDIA, who also ship silicon. We engaged early to explain what we are doing and why it strengthens the Arm ecosystem — the more Arm-optimized software there is, the stronger each partner becomes. On the day of Arm Everywhere, 50+ partners all said yes publicly — some provided quotes, some introduced partners, some recorded videos; that is a powerful endorsement.
We are grateful and do not take it for granted. Fundamentally, we are doing this because customers asked; we are sold out and facing further demand, which speaks volumes.
Q: Entering FY27, what is the pace of data center royalty growth?
A (CEO): Royalty from Neoverse-based customers has already doubled YoY, and — looking at Jason — I expect another doubling this year. The business is very strong. When I said in Feb that DC would be Arm’s largest business, I meant royalty alone — now with AGI CPU, we have two strong, independent, non-cannibalizing revenue engines. By FY31, we target $15bn from AGI CPU and $10bn from IP (with IP’s doubling driven largely by DC pricing).
Q: Should we think of license growth as high single-digit long term, or higher?
A (CFO): For this fiscal year, we guide license at ~20%, with a long-term target of high single-digit to low double-digit. The AI investment super-cycle has run for three years and could last at least another year; beyond that, I would set '10%+ YoY' as a floor based on what we can see today.
Q: For the $1bn silicon revenue across FY27+FY28 (roughly $90mn+ in FY27 Q4 and ~$910mn in FY28) with first-gen GPM ~30%+, what is the chip biz OpEx this year and next, and when does it add to EPS?
A (CFO): Your split is broadly right — about $90mn in FY27 Q4 and ~$910mn in FY28, consistent with what we disclosed 5–6 weeks ago. We do see higher demand, but until wafer/memory shortages ease, we assume those numbers. On OpEx, this year’s plan already includes customer support for the chip biz. The most expensive piece in chip development is the compute die, essentially the CSS — much of which we already do in the IP biz, enabling high reuse and better standalone profitability.
Incremental OpEx is small — teams are in the dozens, not hundreds. You can expect operating profit positive next year for this biz. By FY31, IP operating/EBITDA margin ~65% and silicon ~35%; at $15bn revenue, that is the likely steady-state, with timing driven by revenue ramp.
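As a consistency check on the FY31 targets above, the blended operating margin implied by $10bn of IP revenue at ~65% and $15bn of silicon revenue at ~35% works out to roughly 47%. A simple sketch using only the disclosed figures:

```python
# Blended FY31 operating margin implied by the stated targets.
ip_rev, ip_margin = 10e9, 0.65      # IP business: $10bn at ~65%
chip_rev, chip_margin = 15e9, 0.35  # silicon business: $15bn at ~35%

total_rev = ip_rev + chip_rev                         # $25bn total
blended_op = ip_rev * ip_margin + chip_rev * chip_margin
blended_margin = blended_op / total_rev
print(f"${total_rev / 1e9:.0f}bn revenue, "
      f"${blended_op / 1e9:.2f}bn operating profit, "
      f"{blended_margin:.0%} blended margin")
```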
Q: Back to CPU/GPU mix in orchestration — should we think bottom-up as 'one agentic flow per core' or in terms of 'ARM instructions per token to orchestrate'?
A (CEO): That is a deep math problem. The latter is too complex; a more direct view is that each agent runs a batch or job with some branch prediction/coding complexity but is fundamentally an asynchronous workload — run, orchestrate, pause, wait. This maps well to single-core execution rather than synchronizing across multiple cores, saving power and boosting efficiency. More cores allow more concurrent batches; our view is 'more cores is better', and CPU core counts will keep rising.
The result is many more CPU cores even if chip counts do not triple — per-chip ASPs rise meaningfully. That is why the 5-year $100–125bn TAM is driven mainly by higher cores per chip lifting ASPs; think per-core, per-batch, not multi-core instruction spreading.
<End of transcript>
Risk disclosure and statements: Dolphin Research Disclaimer & General Disclosure
