---
title: "The wave of intelligent agents is coming, and CPUs are enjoying a \"Renaissance moment\"! Intel, AMD, and ARM stocks are soaring together"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/284021383.md"
description: "On Friday, at the open of the U.S. stock market, Intel and AMD shares hit all-time highs, rising over 27% and 14% respectively. With the rise of AI agents, market demand for CPUs has surged, and AI computing architecture is shifting from GPU-centric toward CPU-centric data centers. The ARM architecture is also favored by investors, thanks to its advantages in energy efficiency and low power consumption. The value chain of AI computing infrastructure is systematically broadening, and future excess returns will no longer be limited to the GPU field."
datetime: "2026-04-24T14:31:02.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/284021383.md)
  - [en](https://longbridge.com/en/news/284021383.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/284021383.md)
---

# The wave of intelligent agents is coming, and CPUs are enjoying a "Renaissance moment"! Intel, AMD, and ARM stocks are soaring together

According to Zhitong Finance APP, at the start of Friday's trading session, the two major x86 CPU giants, Intel (INTC.US) and AMD (AMD.US), saw their stock prices hit all-time highs. Intel, the traditional American chip giant of the x86 camp, surged over 27% at one point on strong, better-than-expected results. Meanwhile AMD, the industry leader in high-performance x86 AI data center server CPUs, also posted impressive gains, jumping over 14% at the open to set a new record high.
Arm Holdings Plc (ARM.US), owner of the ARM instruction set architecture, also rose sharply, with its opening price hitting a record high, underscoring strong investor interest in the ARM architecture and its advantages in energy efficiency and low power consumption.

With Anthropic's high-profile launch of Claude Cowork, and autonomous super AI agent tools such as OpenClaw expected to take off in 2026, the wave of AI agents is rapidly sweeping the globe. The bottleneck in AI computing architecture is shifting from GPUs, which are built around matrix multiplication throughput, to data center CPUs centered on control flow, task orchestration, and memory/IO coordination, creating a severe supply-demand imbalance for high-performance CPUs aimed at large-scale AI data centers.

For the past two years, the AI narrative has been almost monopolized by GPUs, with CPUs seemingly relegated to a "supporting role" in the AI arms race. But as agentic AI workflows like the open-source OpenClaw come to dominate inference workloads, data orchestration, task scheduling, memory access, network communication, and multi-tool invocation, the market has realized that without a powerful CPU at the system's core, GPU clusters cannot operate efficiently. This essentially marks the return of CPUs from "undervalued infrastructure" to the center stage of AI data centers, a "Renaissance"-style revival.

In the era of AI agents, the computing system is shifting from merely stacking GPUs to more complex heterogeneous computing: CPUs must handle large-scale task scheduling, data movement, memory management, model invocation, toolchain orchestration, inference request distribution, database retrieval, network communication, and security isolation.
In other words, CPUs are no longer just "background components" in AI data centers; they have re-emerged as the system core and scheduling brain of the AI factory. This matches the core imagery of the "Renaissance": a traditional computing architecture, once undervalued and overshadowed by the GPU halo, is regaining its value in this era and its pricing power in the capital markets.

The rapid pace of AI data center construction has pushed Intel's data center CPUs into a supply-demand imbalance, with delivery times for some of Intel's most in-demand high-performance server CPUs stretching to as long as six months. Prices of these high-performance, data-center-grade server CPUs have generally risen by 10% this year. This is also why Intel, a chip maker whose stock had languished for a year and a half, has seen its shares soar over 120% this year to a new record high.

**The flames of war in the Middle East cannot suppress the "AI bull market" narrative! GPUs no longer monopolize the computing power theme as the wave of intelligent agents ignites CPUs**

Morgan Stanley, Stifel, DA Davidson, and other Wall Street firms believe that the two major PC and data center CPU giants, Intel (INTC.US) and AMD (AMD.US), occupy the most advantageous position to benefit from the record explosion in data center CPU demand. Top Wall Street analysts also expect memory chip makers to benefit from the exponential expansion of CPU demand, with Morgan Stanley arguing that the major U.S. memory manufacturers Micron (MU.US) and SanDisk (SNDK.US) are likewise well positioned.
With the KOSPI Composite Index, heavily weighted toward Samsung and SK Hynix, reaching a record high despite a deteriorating geopolitical situation, and with TSMC, the "king of chip foundries," driving the Taiwanese stock market to a record high, investors are increasingly convinced that the "AI computing power investment theme" can drown out all market noise, especially as the Philadelphia Semiconductor Index, the "barometer of chip stocks," recorded an unprecedented 17 consecutive sessions of gains.

![1777040992(1).png](https://imageproxy.pbkrs.com/https://img.zhitongcaijing.com/image/20260424/1777041009230128.png?x-oss-process=image/auto-orient,1/interlace,1/resize,w_1440,h_1440/quality,q_95/format,jpg)

At the same time, the weight distribution across the AI computing infrastructure value chain is beginning to shift: the next round of excess alpha returns will no longer belong solely to the strongest leaders in AI GPUs and AI ASICs, but will spread systematically to CPUs, memory, PCBs, liquid cooling systems, ABF substrates, and wafer foundry services across the entire AI infrastructure stack. In this narrative shift, Wall Street firms such as Morgan Stanley believe data-center-oriented CPUs and DRAM/NAND memory chips may be the biggest beneficiaries among the AI computing subcategories.

In the AI agent pipeline, a large share of the workload is consumed not only in token generation on GPUs but also in CPU-dominated processes such as Python interpretation, web scraping, database retrieval, RAG index access, lexical processing, task queue scheduling, RPC/IPC communication, and KV state updates.
This means that user experience is increasingly determined not by the peak computing power of a single GPU, but by whether the CPU has sufficient core count, thread concurrency, cache hierarchy, memory bandwidth, and PCIe/CXL/interconnect scheduling capability to support high-frequency tool calls and high-density task switching. Once CPU cores, the memory subsystem, or I/O scheduling fall short, even a GPU with ample nominal computing power will see its utilization collapse while it waits on data preparation, task coordination, and the rest of the system.

It is therefore undeniable that the bottleneck of AI computing architecture is shifting from GPUs, centered on matrix multiplication throughput, to data center CPUs, centered on control flow, task orchestration, and memory/IO coordination. The root of this change lies in a fundamental migration of the workload paradigm: CPUs are no longer just general-purpose computing chips; they have become the control plane processors, system orchestration engines, and resource scheduling hubs of the intelligent agent era.

The claim that "the underestimated CPU has become the new bottleneck for AI" is not an emotional judgment but the inevitable result of AI workloads evolving from "inference computation problems" into "complex systems engineering problems." In the early stages, large model inference followed a "single request, single generation" pattern, where CPUs mainly handled data transport, request routing, and basic scheduling, serving as a typical auxiliary control plane. With the advent of AI agents and reinforcement learning, however, system load is no longer limited to a single forward inference pass; it has evolved into complex closed loops that include task planning, tool invocation, sub-agent collaboration, environmental interaction, state management, and result verification.
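As a rough back-of-the-envelope illustration of the utilization argument above (a toy model with hypothetical timings, not figures from the article): if CPU-side orchestration cannot be overlapped with GPU compute, the GPU sits idle for the entire CPU share of each request.

```python
def gpu_utilization(cpu_ms: float, gpu_ms: float) -> float:
    """Fraction of wall-clock time the GPU is busy, assuming CPU-side
    work (scheduling, data prep, tool calls) and GPU compute run
    strictly back to back with no overlap."""
    return gpu_ms / (cpu_ms + gpu_ms)

# Hypothetical numbers: 30 ms of GPU compute per step. As agentic
# workflows push CPU-side orchestration from 5 ms to 60 ms per step,
# GPU utilization falls from roughly 86% to 33%.
print(round(gpu_utilization(5, 30), 2))   # 0.86
print(round(gpu_utilization(60, 30), 2))  # 0.33
```

Real serving stacks pipeline and batch requests, so the effect is softer in practice, but the direction is the same: more CPU cores and faster orchestration translate directly into higher GPU utilization.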
This "orchestration layer" is essentially a CPU-intensive workload, characterized by heavy control flow, branching, system calls, and memory access, and it cannot be efficiently offloaded to GPUs. CPUs are therefore transitioning from "supporting roles" to the new bottleneck that determines system throughput, latency, and resource utilization.

![1777040886(1).png](https://imageproxy.pbkrs.com/https://img.zhitongcaijing.com/image/20260424/1777040897200760.png?x-oss-process=image/auto-orient,1/interlace,1/resize,w_1440,h_1440/quality,q_95/format,jpg)

Morgan Stanley's latest forecast indicates that the explosion of intelligent agents marks a structural shift from computation to orchestration, creating an estimated $32.5 billion to $60 billion of additional market space for CPUs by 2030 and expanding the total addressable market for server-grade CPUs to between $82.5 billion and $110 billion. A TrendForce forecast suggests that in the AI agent era, the CPU-to-GPU ratio in AI data centers may be re-evaluated from the traditional 1:4 to 1:8 toward a range of 1:1 to 1:2.

**Wall Street cheers that the rise of AMD and ARM is not over**

As of the time of writing, Intel's stock price hovers around $85, up over 27% intraday and above the target prices of most Wall Street analysts, while AMD and ARM still have some distance to go before reaching Wall Street's highest targets. In a recent investor report led by veteran strategist Joseph Moore, Morgan Stanley analysts wrote: "The obvious beneficiaries of the strengthening CPU market—Intel and AMD—have somewhat complex strategic frameworks, but the exponential expansion of server CPU demand is crucial to the profit outlook of both."
"Between the two, we prefer AMD; additionally, at this point in time, we believe that memory chip manufacturers have a significantly better risk-reward ratio, and the memory theme can be considered one of the direct beneficiaries of the expansion in CPU demand," the Morgan Stanley analysts added.

The D.A. Davidson team led by veteran Wall Street analyst Gil Luria upgraded AMD (AMD.US) before Friday's open, following Intel's strong earnings report, and raised its 12-month target price sharply to $375, the highest on Wall Street. As of the time of publication, AMD's stock had surged 14% to around $348.

"We are upgrading AMD from Neutral to Buy and raising the target price from $220 to $375 on structural growth in CPU demand, while AMD's visibility in this great data center build-out has significantly improved. Given the extent to which Intel's results exceeded expectations, we see significant upside to AMD's earnings expectations, which should begin to show up in AMD's March quarter results scheduled for May 5," the D.A. Davidson team stated.

"We believe Intel's results are a prelude to a major leap in AMD's CPU business, and we are confident the structural shift toward agentic AI workloads is creating unprecedented demand for server CPUs. Given our judgment that demand will exceed supply for the foreseeable future, AMD is well positioned to raise prices significantly across its entire product portfolio to support and expand profit margins," the team added.

Currently, Wall Street's bullish logic on ARM has shifted from viewing it as a "smartphone IP licensing company" to seeing it as one of the core beneficiaries of the super wave of AI data center CPUs and agentic AI infrastructure.
On the highest target price, the well-known investment firm Guggenheim recently raised its ARM price target to $240, the highest on Wall Street, arguing that ARM is transitioning from a traditional IP licensor for smartphones and lightweight consumer electronics into a direct participant in AI data center silicon and supercomputing platforms.

According to a statement released on Friday, U.S. cloud computing and e-commerce giant Amazon (AMZN.US) and Facebook parent Meta Platforms Inc. (META.US) have reached a multi-billion dollar long-term agreement under which the social media giant will lease hundreds of thousands of Amazon's self-developed ARM-architecture general-purpose data center server CPUs for its large new AI data centers, to handle the massive AI inference workloads generated by users on Facebook and Instagram.

Graviton is the ARM-architecture general-purpose server CPU developed by Amazon's AWS cloud computing division, responsible in AI data centers for general computing, scheduling, data pre- and post-processing, service orchestration, and some inference-related scheduling and coordination. For a company like Meta, which handles massive volumes of AI agents, recommendations, advertisements, content generation, and query responses every day, many tasks do not require an expensive GPU end to end. Using high-density ARM CPUs like Graviton instead of Intel x86 CPUs for the peripheral loads around inference services can reduce the cost per request, free up GPUs for higher-value training and inference tasks, and improve overall cluster TCO.
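A toy cost model makes the TCO argument concrete (all prices and timings here are hypothetical, not figures from the article): every CPU millisecond spent on a GPU server occupies an expensive machine, so offloading peripheral CPU work to cheap, dense ARM nodes lowers the cost per request.

```python
GPU_NODE_COST_PER_HR = 40.0  # hypothetical all-in hourly cost of a GPU server
ARM_NODE_COST_PER_HR = 2.0   # hypothetical cost of a dense ARM CPU server

def cost_per_request(gpu_ms: float, cpu_ms: float, offload: bool) -> float:
    """Compute cost of one request needing gpu_ms of GPU work plus cpu_ms
    of peripheral CPU work (routing, pre/post-processing, RAG lookups),
    run either on the GPU host itself or offloaded to ARM CPU nodes."""
    ms_per_hr = 3600 * 1000
    if offload:
        return (gpu_ms * GPU_NODE_COST_PER_HR
                + cpu_ms * ARM_NODE_COST_PER_HR) / ms_per_hr
    return (gpu_ms + cpu_ms) * GPU_NODE_COST_PER_HR / ms_per_hr

# With 30 ms of GPU work and 60 ms of CPU work per request, offloading
# cuts compute cost per request to about 37% of the all-on-GPU-host case.
ratio = cost_per_request(30, 60, True) / cost_per_request(30, 60, False)
print(round(ratio, 2))  # 0.37
```

The ratio is scale-invariant, so the same logic applies whether the fleet serves thousands or billions of requests per hour; the bigger the CPU-bound share of each request, the larger the saving.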
Arm also emphasizes that the expansion of AI data centers is making orchestration, data processing, and system control on low-power, high-efficiency ARM CPUs a key bottleneck; AWS's fifth-generation Graviton raises the core count to 192, reflecting the rising demand for CPU density.

Arm can be considered one of the biggest winners of the global AI frenzy. NVIDIA's self-developed Grace CPU is based on the ARM architecture, as is Amazon's self-developed Graviton data center server processor. There are also Google's first self-developed ARM data center CPU, the Axion processor built on ARM Neoverse, and Microsoft's self-developed Azure Cobalt 100 ARM data center CPU. The ARM architecture is evolving from "king of smartphones" into one of the foundational pieces of computing infrastructure for the AI cloud era.

The reduced instruction set computing (RISC) approach of the ARM architecture gives server CPUs built on it a significant advantage over Intel's x86 architecture in energy efficiency and power consumption when executing AI inference and training tasks. This makes the ARM architecture particularly well suited to the data center server field, working efficiently alongside AI GPUs to meet the nearly endless demand for AI inference and training computing power.

### Related Stocks

- [AMD.US](https://longbridge.com/en/quote/AMD.US.md)
- [ARM.US](https://longbridge.com/en/quote/ARM.US.md)
- [INTC.US](https://longbridge.com/en/quote/INTC.US.md)
- [AMDL.US](https://longbridge.com/en/quote/AMDL.US.md)

## Related News & Research

- [Is Arm Stock a Buy at New All-Time Highs?](https://longbridge.com/en/news/283884320.md)
- [IPO Weekly Weigh-in: Cerebras vs. major AI chipmakers?](https://longbridge.com/en/news/283667424.md)
- [Analysts See AMD Poised To Capture More AI Data Center Demand](https://longbridge.com/en/news/284066308.md)
- [BUZZ-Street View: AI demand boosts Intel, but capacity limits loom](https://longbridge.com/en/news/283982891.md)
- [AMD Stock Just Hit Yet Another All-Time High. It Can Thank Intel for Making the CPU Relevant Again.](https://longbridge.com/en/news/284025458.md)