UBS Global Technology and AI Conference: AI demand is "insatiable"! AMD declares that we are only in the "second year of a ten-year super cycle."

Wallstreetcn
2025.12.05 08:35

According to UBS meeting minutes, the AI industry is in the second year of a decade-long super cycle, with market demand described as "insatiable and continuously growing." The core tension lies in supply bottlenecks rather than insufficient demand. Leading companies such as AMD and CoreWeave are expanding aggressively, backed by orders worth hundreds of billions of dollars and deep collaborations (such as AMD with OpenAI and Nebius with Microsoft), to meet future demand. This suggests that the industry chain's high prosperity will continue for several years.

According to the latest minutes from UBS's Global Technology and AI Conference, leading companies believe the AI industry is in the second year of a decade-long super cycle, with demand that is "insatiable and continuously growing," while supply-side bottlenecks, rather than insufficient demand, are the industry's main constraint.

According to the Chasing Wind Trading Desk, UBS's meeting minutes of December 4th argue that for investors, this means the strong growth in the AI field is far from over, with the entire industry chain, from chip design and cloud computing infrastructure to data center construction, entering a multi-year period of sustained high prosperity. Leading manufacturers generally see no "bubble" in the market and are instead working hard to fulfill long-term orders that have already been secured.

The views of key companies at this conference, including AMD, CoreWeave, and Nebius, particularly confirm this trend, showcasing astonishing growth expectations, cooperation orders worth billions or even tens of billions of dollars, and strong confidence in the continued explosive demand in the coming years.

Emerging cloud service providers (Neocloud) and infrastructure experts pointed out that due to the continuous expansion of model scale, the popularity of post-training applications, and the more intensive consumption of computing resources for inference tasks, AI demand is surging sharply.

The visibility of data center interconnect demand has extended from the traditional one quarter to one to three years, highlighting that the entire industry is preparing for long-term growth. Demand for traditional servers has also accelerated with the rise of AI agent workloads. Overall, the market consensus is that the current challenge is not finding demand but addressing supply constraints, from power and labor to advanced components, in order to deliver computing power more quickly.

AMD: We are in the second year of a decade-long super cycle

AMD CEO Lisa Su expressed extreme optimism about the AI cycle at the conference. She clearly stated that the company does not believe there is a "bubble" currently, but views AI as "the second year of a ten-year computing super cycle."

Ambitious growth targets: AMD recently raised its compound annual growth rate (CAGR) expectation from over 50% in recent years to over 60%. The company expects its total addressable market (TAM) to reach $1 trillion by 2030 and plans to capture over 10% market share. In addition to AI GPUs, its revenue share in the server CPU market has also increased to 40% and continues to grow.

Significant validation from the OpenAI collaboration: The multi-generational 6GW cooperation agreement between AMD and OpenAI is strong testament to the competitiveness of its products. Lisa Su explained that each gigawatt of committed compute is equivalent to tens of billions of dollars in sales; the collaboration not only brings AMD large-scale orders but also gives OpenAI a deep stake in the success of AMD's technology, thereby providing strong confidence for other AI-native customers and hyperscale cloud service providers.

Advancement of full-stack solutions: Through its acquisition of ZT Systems, AMD is building complete system-level solutions. The company plans to launch the MI450 rack in 2026, providing full-stack capabilities from chips to systems, and is committed to building an open ecosystem.

Confidence in Supply: AMD emphasizes that the company has established strong partnerships with TSMC, memory, and packaging partners, and is confident in meeting strong demand and achieving growth targets.

CoreWeave: Demand is "Insatiable," the Challenge is Faster Delivery

As a representative of emerging AI cloud service providers, CoreWeave describes the current market demand as "insatiable" and "relentless."

  • Unstoppable Demand Growth: The company believes that the surge in demand is driven by three main factors: 1) The model scaling law continues to be effective; 2) A surge in compute-intensive post-training applications; 3) More compute-intensive inference models becoming mainstream. CoreWeave currently has a backlog of orders amounting to $55 billion.
  • Firm Denial of "Overbuilding": CoreWeave has explicitly refuted market concerns about overbuilding in AI infrastructure. The company emphasizes that they are building capacity committed by customers for five years, essentially "made to order," and are still striving to catch up with demand. A strong piece of evidence is that a cluster with over 10,000 H100 GPUs had its contract renewed by the customer at a price only 5% lower than a few years ago, demonstrating the value retention and continuous returns of high-quality computing power.
  • Dominance of the NVIDIA Ecosystem: Although the market anticipates diversification, CoreWeave admits that demand on its platform "overwhelmingly points to NVIDIA GPUs." If customer demand changes, the company will consider other options, but currently, all signals still point to a demand for more NVIDIA GPUs.
  • Robust Financing Model: CoreWeave funds its expansion through asset-level term loans, with 60% of its customer orders coming from investment-grade counterparties, greatly enhancing its financial robustness.

Nebius: We Are Not a GPU Rental Company, But a "Hyperscale Cloud Service Provider" for the AI Era

Nebius, spun off from Yandex, showcases its ambition to become a full-stack AI "hyperscale cloud service provider," rather than just offering GPU as a service.

Remarkable Demand Growth: Nebius has observed some customers' demand doubling every 6 to 8 weeks. Its business pipeline accelerated in Q3 2025, growing 70% quarter-over-quarter and adding $4 billion in new business. This strongly suggests the industry has not overbuilt; the growth rate of inference demand even exceeds that of capital expenditures.

Massive Partnership with Microsoft: Nebius has signed a five-year agreement worth up to $19 billion with Microsoft. The deal primarily supports Microsoft's Copilot business and new model development rather than competing with Azure's core cloud business, providing a top-tier endorsement of Nebius's scale and technical strength.

Clear Path to Profitability: Nebius focuses not only on growth but also on profitability. The company expects long-term EBIT margins of 20-30%. Its path there includes achieving roughly 20% cost savings through self-developed racks and other full-stack optimizations, providing high-margin cloud services, and leveraging its engineering team for operational leverage. On the accounting side, the company depreciates Hopper GPUs over 4 years, more conservative than the 6 years used by peers.
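The depreciation point can be made concrete with a small straight-line calculation. The fleet cost below is purely hypothetical (the article gives no figure); it only illustrates why a shorter schedule is the more conservative choice:

```python
# Straight-line depreciation: a shorter useful life recognizes more expense
# each year, depressing near-term EBIT -- the "conservative" choice.
fleet_cost = 1_200_000_000  # assumed $1.2B of Hopper GPUs (illustrative only)

annual_expense_4yr = fleet_cost / 4  # Nebius's stated 4-year schedule
annual_expense_6yr = fleet_cost / 6  # the 6-year schedule attributed to peers

print(annual_expense_4yr)  # 300000000.0 -- $300M expensed per year
print(annual_expense_6yr)  # 200000000.0 -- $200M expensed per year
```

Under the 4-year schedule, the same hardware hits the income statement 50% harder each year, so reported margins understate, rather than flatter, the underlying economics.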

Flexible capacity expansion: Nebius has raised $8.5 billion to expand its data center footprint. Its advantage lies in flexible project management, allowing it to advance multiple data center projects in parallel and effectively address the bottleneck in delivering AI computing power.