---
title: "Google Seeks Partnership with Marvell to Develop AI Inference Chips, Accelerating Shift Away from Broadcom"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/283274497.md"
description: "Google is in talks with Marvell to jointly develop two custom AI inference chips: a memory processing unit (MPU) designed to work alongside TPUs and a new TPU specialized for inference scenarios, with a planned production volume of nearly 2 million units. This move represents Google's latest step in its systematic effort to reduce reliance on Broadcom, while NVIDIA's March launch of the LPU has further accelerated Google's strategic timeline. As demand for inference computing power surges in the era of AI agents, this chip race is quietly picking up speed."
datetime: "2026-04-20T00:57:16.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/283274497.md)
  - [en](https://longbridge.com/en/news/283274497.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/283274497.md)
---

# Google Seeks Partnership with Marvell to Develop AI Inference Chips, Accelerating Shift Away from Broadcom

Google is seeking a partnership with chip designer Marvell Technology to develop two new chips tailored specifically for AI inference workloads. The move marks Google's latest effort to systematically reduce its long-term dependence on design partner Broadcom, and reflects rapidly heating demand for inference chips across the AI industry.

According to a report by The Information on the 19th, sources familiar with the matter said **the negotiations between Google and Marvell involve two chips: a memory processing unit (MPU) designed to operate alongside Google's Tensor Processing Units (TPUs), and a new TPU engineered specifically for running AI models.** Unlike Google's previous purchases of off-the-shelf chips from Marvell, this collaboration aims to create bespoke semiconductors customized exclusively for Google.
These negotiations could have a direct impact on the chip market landscape. Broadcom's stock faces potential pressure: despite a new agreement with Google signed earlier this month that extends through 2031, Google's strategic intent to diversify its supplier base is becoming increasingly clear. Marvell, meanwhile, stands to further expand its custom chip business, already its fastest-growing segment.

## Two New Chips: Clear Division of Labor Targeting Inference Efficiency

According to two sources familiar with the matter, the memory processing unit developed through the Google-Marvell partnership would work in tandem with existing TPUs, dynamically allocating AI workloads between the two chip types according to their differing computational and memory requirements. This design reflects the inherent heterogeneity of inference tasks: certain steps in generating a response demand extremely high compute, while others are constrained by the read/write speed of chip memory, making it difficult for a single processor to handle both efficiently.

Google plans to produce nearly 2 million memory processing units, though sources note that because negotiations are still at an early stage, this figure remains subject to change. **For reference, Morgan Stanley estimates Google's TPU production volume for 2027 at approximately 6 million units.**

Both parties aim to finalize the design of the memory processing unit as soon as next year, followed by handover to pilot production. The timeline for the second chip, a new TPU built specifically for inference scenarios, and its planned production volume remain unclear. While Google currently manufactures its chips at TSMC, it has yet to be confirmed whether the new chips will also be fabricated there.

## Inference Chip Arms Race Accelerates; NVIDIA's LPU Launch Acts as Catalyst

Google's acceleration of this cooperation is partly driven by competitive pressure from NVIDIA.
According to a Google employee, the company had long planned to develop inference chips, but after NVIDIA unveiled its Language Processing Unit (LPU) at the GTC conference earlier this year, Google immediately sped up the effort. NVIDIA's LPU is built on technology licensed from startup Groq for $20 billion, and Marvell served as the chip design partner for Groq's first-generation LPU, meaning Marvell already has practical experience designing inference chips.

The surge in demand for inference chips ultimately stems from the evolution of AI products themselves. More complex AI applications such as autonomous agents require far greater computing power than traditional chatbots. OpenAI recently signed a procurement agreement exceeding $20 billion for inference chips with Cerebras and is also working with Broadcom to develop its own inference chips, a sign that the entire industry is accelerating its push into this segment.

## Reducing Dependence on Broadcom: Clear Strategic Intent, Yet Progress Remains Constrained

Google's pursuit of a partnership with Marvell is part of a supplier-diversification strategy it has been advancing since 2023. **Broadcom has long served as Google's sole design partner for TPUs, collecting licensing fees for each TPU produced. With the explosive growth in TPU demand, the fees Google pays Broadcom have risen accordingly, forming the core motivation behind Google's search for alternatives.**

Google brought in MediaTek last year to participate in TPU design and production, and the talks with Marvell would further broaden its partner matrix. Google had previously purchased CXL controller chips from Marvell to manage memory sharing among servers within its data centers, a prior collaboration that established a foundation of trust between the two companies.
However, Broadcom's core position will be difficult to displace in the short term. Earlier this month, Broadcom signed a new agreement with Google to provide custom TPUs and networking components for Google's next-generation AI data center racks, extending the partnership through 2031. This suggests Google's diversification strategy is about adding options to the existing landscape rather than making a complete switch.

For Marvell, a deep partnership with Google would mean its custom chip business gains the endorsement of another heavyweight client. Marvell's core businesses cover standard data center networking, storage, and optical interconnect chips, but its custom business, which helps clients design proprietary chips, has become its fastest-growing segment in recent years.

The commercialization trajectory of Google's TPU also points to a broader potential market for this cooperation. Starting last year, Google began leasing TPUs to customers outside its own data centers, directly challenging NVIDIA's dominance in the AI chip market; Anthropic, Meta, and Apple have all become TPU customers. If development of the new inference chips proceeds smoothly, their potential market will not be limited to Google's internal demand.
## Related Stocks

- [MRVL.US](https://longbridge.com/en/quote/MRVL.US.md)
- [GOOG.US](https://longbridge.com/en/quote/GOOG.US.md)
- [GOOGL.US](https://longbridge.com/en/quote/GOOGL.US.md)
- [MVLL.US](https://longbridge.com/en/quote/MVLL.US.md)
- [GOOW.US](https://longbridge.com/en/quote/GOOW.US.md)
- [GGLL.US](https://longbridge.com/en/quote/GGLL.US.md)
- [AVGO.US](https://longbridge.com/en/quote/AVGO.US.md)
- [MS.US](https://longbridge.com/en/quote/MS.US.md)
- [600795.CN](https://longbridge.com/en/quote/600795.CN.md)
- [002128.CN](https://longbridge.com/en/quote/002128.CN.md)
- [002300.CN](https://longbridge.com/en/quote/002300.CN.md)
- [002330.CN](https://longbridge.com/en/quote/002330.CN.md)
- [TSM.US](https://longbridge.com/en/quote/TSM.US.md)
- [NVDA.US](https://longbridge.com/en/quote/NVDA.US.md)
- [PROQ.US](https://longbridge.com/en/quote/PROQ.US.md)
- [IROQ.US](https://longbridge.com/en/quote/IROQ.US.md)
- [GROM.US](https://longbridge.com/en/quote/GROM.US.md)
- [NA.US](https://longbridge.com/en/quote/NA.US.md)
- [OpenAI.NA](https://longbridge.com/en/quote/OpenAI.NA.md)
- [CBLL.US](https://longbridge.com/en/quote/CBLL.US.md)
- [SST.US](https://longbridge.com/en/quote/SST.US.md)
- [CRUS.US](https://longbridge.com/en/quote/CRUS.US.md)
- [MTK.NA](https://longbridge.com/en/quote/MTK.NA.md)
- [002394.CN](https://longbridge.com/en/quote/002394.CN.md)
- [002250.CN](https://longbridge.com/en/quote/002250.CN.md)
- [002454.CN](https://longbridge.com/en/quote/002454.CN.md)
- [AXHU.US](https://longbridge.com/en/quote/AXHU.US.md)
- [META.US](https://longbridge.com/en/quote/META.US.md)
- [AAPL.US](https://longbridge.com/en/quote/AAPL.US.md)
- [MS-O.US](https://longbridge.com/en/quote/MS-O.US.md)
- [MS-Q.US](https://longbridge.com/en/quote/MS-Q.US.md)
- [MS-E.US](https://longbridge.com/en/quote/MS-E.US.md)
- [MS-I.US](https://longbridge.com/en/quote/MS-I.US.md)
- [MS-L.US](https://longbridge.com/en/quote/MS-L.US.md)
- [MS-P.US](https://longbridge.com/en/quote/MS-P.US.md)
- [MS-A.US](https://longbridge.com/en/quote/MS-A.US.md)
- [MS-F.US](https://longbridge.com/en/quote/MS-F.US.md)
- [MS-K.US](https://longbridge.com/en/quote/MS-K.US.md)
- [NVD.DE](https://longbridge.com/en/quote/NVD.DE.md)
- [SSTPW.US](https://longbridge.com/en/quote/SSTPW.US.md)
- [SST+.US](https://longbridge.com/en/quote/SST+.US.md)

## Related News & Research

- [Google brings its Gemini Personal Intelligence feature to India](https://longbridge.com/en/news/282713461.md)
- [Google now lets you explore the web side-by-side with AI Mode](https://longbridge.com/en/news/283035195.md)
- [Cadence and Google Collaborate to Scale AI-Driven Chip Design with ChipStack AI Super Agent on Google Cloud | CDNS Stock News](https://longbridge.com/en/news/282880370.md)
- [Google DeepMind Releases New AI Model to Bring Robots Closer to Real Autonomy](https://longbridge.com/en/news/282728878.md)
- [Google in talks with Marvell to build new AI chips for inference, The Information reports](https://longbridge.com/en/news/283256162.md)