
MSFT (Transcript): Agents will be the apps of the new era
Below is Dolphin Research's transcript of $Microsoft(MSFT.US)'s FY26 Q2 earnings call. For our earnings analysis, see 'Mature Microsoft: Crouching for a higher jump?'.
I. Key financials recap
Overall results: revenue of $81.3bn, +17% YoY (+15% cc). OP grew 21% YoY (+19% cc); EPS was $4.14, with Adj. EPS up 24% YoY (+21% cc).
Microsoft Cloud: revenue topped $50bn for the first time at $51.5bn, +26% YoY (+24% cc). GPM was 67%.
Commercial bookings and RPO: commercial bookings rose 230% YoY (+228% cc), driven by large multi‑year commitments including OpenAI. Commercial RPO reached $625bn, +110% YoY; approx. 25% will be recognized over the next 12 months (+39% YoY). Roughly 45% of the commercial RPO balance relates to OpenAI.
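The RPO figures above can be sanity-checked with a quick back-of-envelope calculation (the variable names below are ours, not Microsoft's):

```python
# Back-of-envelope check on the commercial RPO figures cited above.
# Inputs come from the call; variable names are illustrative.
rpo_total_bn = 625        # commercial RPO, $bn
next_12m_share = 0.25     # ~25% recognized over the next 12 months
openai_share = 0.45       # ~45% of the balance relates to OpenAI

next_12m_bn = rpo_total_bn * next_12m_share          # ~$156bn
non_openai_bn = rpo_total_bn * (1 - openai_share)    # ~$344bn

print(f"Recognizable over next 12 months: ~${next_12m_bn:.0f}bn")
print(f"Non-OpenAI RPO balance: ~${non_openai_bn:.0f}bn")
```

The ~$344bn non-OpenAI balance lines up with the "approximately $350bn" management cites in the Q&A.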
Segment revenue:
- Productivity & Biz. Processes: revenue of $34.1bn, +16% YoY (+14% cc).
- Intelligent Cloud: revenue of $32.9bn, +29% YoY (+28% cc). Azure and other cloud services revenue grew 39% YoY (+38% cc).
- More Personal Computing: revenue of $14.3bn, -3% YoY.
Margins and Capex: company GPM was 68%, down slightly YoY on continued AI infra investment and higher AI product usage. Capex was $37.5bn, with roughly two‑thirds in short‑lived assets such as GPUs and CPUs; OCF rose 60% YoY to $35.8bn.
Shareholder returns: returned $12.7bn to shareholders via dividends and buybacks, up 32% YoY.
II. Earnings call details
Management highlights
1) AI strategy and platform progress:
Overall strategy: focus on three layers of the stack — cloud and token factory, the agent platform, and great agent experiences. AI diffusion is just starting to impact GDP and expand TAM.
Cloud and token factory:
Long‑term competitiveness hinges on building infra for new hyperscale workloads. Infra must reflect workload heterogeneity and distribution to meet needs across geographies and segments, including the long tail.
The key optimization metric is tokens per watt per dollar, ultimately improving utilization and lowering TCO via advances in chips, systems and software. This is how we maximize output per unit of power and spend.
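The tokens-per-watt-per-dollar framing can be sketched as a simple ratio. Microsoft does not disclose its exact formula on the call, so the function below is our illustrative assumption:

```python
def tokens_per_watt_per_dollar(tokens_served: float,
                               avg_power_watts: float,
                               total_cost_usd: float) -> float:
    """Normalize token output by both power draw and spend.

    Illustrative only: the exact metric definition is not disclosed.
    """
    return tokens_served / (avg_power_watts * total_cost_usd)

# Example: a hardware refresh serving 2x the tokens at the same power and
# 1.2x the cost still improves the metric by ~67% — lower TCO per token.
old_gen = tokens_per_watt_per_dollar(1_000_000, 500, 100)
new_gen = tokens_per_watt_per_dollar(2_000_000, 500, 120)
print(f"Improvement: {new_gen / old_gen - 1:.0%}")
```

This is why the call frames chip, system and software advances as compounding: each lifts the numerator or shrinks the denominator of the same fleet-level ratio.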
First‑party silicon: we introduced the Maia 200 accelerator delivering 10+ petaflops at FP4 precision, with TCO >30% lower than the latest hardware, to be used for inference and synthetic data generation. On CPUs, Cobalt 200 delivers >50% performance uplift vs. our first custom processor.
Agent platform:
This is the new application platform; agents are the new apps. To build, deploy and manage agents, customers need a model catalog, fine‑tuning, orchestration, context engineering, AI safety, governance, observability and guardrails.
Cloud providers must offer broad model choice. Customers expect to use multiple models within the same workload, with tuning and optimization based on cost, latency and performance. Choice and optimization across models are essential.
To make agents effective, they must be grounded in enterprise data and knowledge. That requires connecting agents to systems of record and operational and analytical data. It also means tapping semi‑ and unstructured productivity and communications data.
Consumer Copilot: experiences span chat, search, creation and shopping. DAUs for Copilot apps nearly tripled YoY, and we now support in‑app purchases with PayPal, Shopify and Stripe.
Microsoft 365 Copilot: focused on org-level productivity; Work IQ creates a stateful assistant grounded in Microsoft 365 data. Conversations per user doubled YoY, and DAUs rose 10x YoY. Net adds of paid seats hit a record, up over 160% YoY, with 15mn paid seats today.
Customers with 35,000+ seats tripled YoY. Adoption is accelerating across large enterprises.
Dynamics 365: winning share via built‑in copilots. Examples: Visa uses a customer knowledge assistant to turn conversation data into insights; Sandvik uses a sales qualification copilot to automate lead screening.
GitHub Copilot: strong growth across all paid SKUs. Copilot Pro Plus individual subscriptions rose 77% QoQ; total paid users reached 4.7mn, +75% YoY. We launched GitHub Agent HQ to unify multi‑vendor coding agents.
Security: added 10+ security copilots for Defender, Intune and more. Rolling out Security Copilot to all E5 customers; Purview reviewed 2.4bn Copilot interactions this quarter, up 9x YoY.
2) Outlook:
Q3 outlook (USD basis):
Overall: revenue of $80.65–$81.75bn (+15%–17% YoY). COGS of $26.65–$26.85bn (+22% YoY); Opex of $17.8–$17.9bn (+10%–11% YoY).
Capex is expected to decline QoQ, reflecting normal cloud build cadence and finance-lease delivery timing. Short-lived assets should remain a similar share of Capex as in Q2.
Segment guidance:
Productivity & Biz. Processes: revenue of $34.25–$34.55bn (+14%–15% YoY). M365 commercial cloud to grow 13%–14% cc, with ARPU uplift from Copilot and E5; Dynamics 365 growth expected to exceed 15%.
Intelligent Cloud: revenue of $34.1–$34.4bn (+27%–29% YoY). Azure revenue growth expected at 37%–38% cc; demand continues to outstrip supply, requiring us to balance newly available capacity against other allocation priorities.
More Personal Computing: revenue of $12.3–$12.8bn. Windows OEM revenue to decline ~10%, partly on memory price increases weighing on PC demand. Search & News ad revenue to grow high single digits; Xbox content and services to decline mid‑single digits YoY.
Full‑year and beyond: FY26 OPM is expected to tick higher, benefiting from prior investments and a mix shift toward Windows OEM and other businesses. Memory price inflation may affect Windows OEM and on‑prem server markets and also Capex. The impact on GPM will be more gradual given six‑year depreciation on equipment.
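Management's point that memory inflation hits GPM only gradually follows from straight-line depreciation: equipment cost enters COGS over six years rather than at purchase. A hedged arithmetic sketch, with hypothetical numbers not taken from the call:

```python
# Hypothetical numbers, not from the call: suppose memory inflation adds
# $6bn to equipment cost in a given year. With the six-year straight-line
# depreciation cited above, only 1/6 of that hits COGS each year.
extra_equipment_cost_bn = 6.0
useful_life_years = 6

annual_cogs_impact_bn = extra_equipment_cost_bn / useful_life_years
print(f"Annual COGS impact: ~${annual_cogs_impact_bn:.1f}bn/yr")
```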
Q&A
Q: Capex is running ahead of expectations while Azure growth is a touch slower, raising ROI concerns. How should we think about capacity additions vs. Azure growth and the ROI of these investments?
A: Azure growth guidance is better viewed as guidance on capacity allocated to Azure. Our Capex, especially GPUs/CPUs, is set against long-term demand. We first meet demand growth and acceleration for first-party apps like M365 Copilot and GitHub Copilot; then we invest in R&D and product innovation, allocating GPUs to AI talent to accelerate product development. Only the remaining capacity goes to external Azure demand.
If all newly installed GPUs were allocated to Azure, revenue growth would exceed 40%. The key is that we invest so every layer of the stack benefits customers, with revenue showing up across the business and Opex growing as we invest in talent.
Q: Server life is six years, while average RPO term is only 2.5 years (vs. two last quarter). How can investors be confident AI‑centric Capex will be monetized over the six‑year hardware life to deliver steady revenue and margin growth?
A: The average term reflects the contract mix. Shorter-term contracts in businesses like M365 pull down the average, while the rest is largely longer-duration Azure contracts; the blended average extended from about two years to 2.5 years this quarter.
For most of the capital we are deploying and GPUs we are buying, the majority of useful life is already covered by contracts; hence the cited risk does not exist. Looking only at Azure, RPO terms are even longer. For GPU deals, including some of our largest customers, commitments cover the entire useful life of the GPUs, so we do not see this risk.
We also continuously optimize the fleet in software, including older generations, and refresh annually per Moore's Law while orchestrating globally in software. Delivery efficiency improves through the hardware life, and margins tend to expand over time — as seen consistently with the CPU fleet.
Q: With ~45% of RPO tied to OpenAI, can you comment on sustainability and potential risks?
A: We highlighted that number precisely because the remaining 55% (~$350bn) relates to our broad portfolio across solutions, Azure, industries and geographies. It is a very large and more diversified RPO than most peers, and we have high conviction in it. That portion alone grew 28%, showing broad‑based momentum across customer segments, industries and regions.
As for OpenAI, it is a great partnership. We remain their scale provider and are excited to support one of the most successful businesses, which keeps us at the forefront of technical and application innovation.
Q: Can you qualify the scale of capacity adds? Last quarter's +1 GW was significant and appears to be accelerating. Investors are focused on Atlanta and the Fairwater project in Wisconsin — any color on the magnitude over coming quarters, regardless of allocation?
A: We are doing everything we can to add capacity as fast as possible. Specific locations like Atlanta and Wisconsin are multi‑year delivery programs, so it is not about any one site. The core is expanding globally, with most in the U.S. (including those two) and in other regions to meet customer demand and rising usage.
We will keep building durable infra — securing power, land and facilities — and deploy GPUs/CPUs as soon as sites are ready. In parallel, we push construction and operational efficiency to drive the highest possible utilization. This is not about two sites; they are on multi‑year timelines, and the imperative is to complete work quickly across all active and upcoming locations.
Q: Maia 200's inference performance looks compelling vs. existing TPU, Trainium and Blackwell. How do you view this achievement, and to what extent are chips becoming a core differentiator for Microsoft? What does it imply for inference cost and margin outlook?
A: We have a long history in first‑party silicon, and we are seeing strong performance running GPT‑5.2. This shows that when new workloads emerge, you can innovate end‑to‑end across model, silicon and the full system — not just the die, but rack‑level networking and memory tuned to the workload. We work closely with our super‑intelligence team, and all our models will be optimized for Maia.
Overall, this is still very early and innovation is rapid. Low‑latency inference is top of mind for everyone. We will not lock into any single technology; we maintain strong partnerships with NVIDIA and AMD and continue to innovate together.
Our goal is the best TCO for the fleet at any point in time. This is not a single‑generation game; you must stay ahead and incorporate substantial external innovation into the fleet to achieve structural TCO advantages. We are excited about Maia, Cobalt, our DPUs and NICs, and our system capabilities enable vertical integration.
But vertical integration does not mean we only do vertical integration. We want to stay flexible, and that is exactly what you are seeing.
Q: Can you speak to the momentum of enterprises going 'frontier'? We see customers achieving step‑function gains adopting your AI stack. As they become frontier firms, how much can spending with Microsoft expand?
A: We see adoption across our three suites — M365, Security and GitHub — with a compounding effect. Work IQ matters because, for any company using Microsoft, the most critical database is the underlying Microsoft 365 graph of tacit information: people, relationships, projects, outcomes and communications. That asset is central to every biz. process and workflow.
Now the agent platform is truly transforming companies. Deploying agents coordinates work and amplifies impact. Firms are also using services in Fabric and Foundry plus GitHub or low‑code tools to build their own agents across customer service, marketing, finance and more.
The most exciting development is the convergence of M365 Copilot, GitHub Copilot and Security Copilot into new agent systems. They compound the benefits of data and deployments and may be the most transformative impact we are seeing today.
Q: How is Azure performing on the CPU side, given operational changes? More broadly, are customers realizing that to do AI right, they need to be in the cloud, and how does that drive cloud migration?
A: First, AI workloads should not be viewed as purely GPU workloads. Any agent invokes tools running in other containers, and those containers require general-purpose compute. We plan the fleet with a mix of AI accelerators and general compute capacity; even training needs significant general compute and adjacent storage.
Inference is similar — the agent pattern inherently requires general compute for the agent. It may not need a GPU, but it needs compute and storage. That is the new paradigm.
Cloud migration continues. The latest SQL Server running as IaaS on Azure keeps growing, which is why we must think about our commercial cloud in balance with the AI cloud: when customers migrate or build new workloads, they need all these infra elements available in the regions where they deploy.
Risk disclosure and statement: Dolphin Research disclaimer and general disclosure
