---
title: "Deeply Bound! Google and Intel Expand AI Chip Partnership, Committing to Xeon Processors"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/282205752.md"
description: "Google is expanding its partnership with Intel to bring Xeon 6 into AI training and inference, signaling the resurgence of CPUs in AI infrastructure to address computational bottlenecks in the agent era. This move offers Intel a new opportunity to penetrate the NVIDIA-dominated landscape, bolstered by U.S. government investment, industrial capital support, and advanced process nodes, leading to a recovery in market confidence. While strengthening this collaboration, Google remains committed to its TPU and in-house CPU strategies, with its diversified layout accelerating competition in AI compute."
datetime: "2026-04-09T13:32:11.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/282205752.md)
  - [en](https://longbridge.com/en/news/282205752.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/282205752.md)
---

# Deeply Bound! Google and Intel Expand AI Chip Partnership, Committing to Xeon Processors

Google has pledged to adopt multiple generations of Intel processors in its artificial intelligence data centers, marking a significant upgrade to the existing partnership and reflecting a new phase in the AI infrastructure race—where the strategic value of CPUs is being re-evaluated.

According to a statement released by both companies on Thursday, **Intel's latest-generation Xeon 6 processors will handle Google's AI training and inference workloads**, providing the chip giant—which has long been on the defensive in the AI chip market—with a new opportunity to compete for market share in the NVIDIA-dominated landscape. Amin Vahdat, Google's CTO of AI Infrastructure, stated, "Intel's Xeon roadmap gives us the confidence to continue meeting the growing performance and efficiency demands of our workloads." Neither party disclosed financial terms or a timeline for the agreement.

This collaboration comes as **CPUs return to the spotlight in the next phase of the AI race.** Dion Harris, NVIDIA's head of AI infrastructure, told CNBC in March that as agent workloads push compute demands beyond graphics processors, the CPU is "becoming the bottleneck." For Intel, this shift could provide a strategic window to reverse its fortunes.

## CPUs Return to the Core of AI Infrastructure

The evolution of AI workloads is reshaping the chip demand structure in data centers. NVIDIA's Dion Harris pointed out that accelerated computing architectures centered around GPUs face bottlenecks when handling emerging agent workloads, making balanced system-level configuration increasingly critical. Intel CEO Lip-Bu Tan also emphasized in the statement, "Scaling AI requires not only accelerators but also balanced systems."

Google and Intel's relationship dates back nearly thirty years, to when Google first built its server racks. This latest agreement, formally incorporating Intel's Xeon 6 processors into Google's AI workload ecosystem, represents a substantial deepening of that long-standing relationship.

## Intel Rebuilds Market Confidence with Multiple Tailwinds

Intel has been making a series of strategic moves to redefine its position in the AI era. In August last year, the U.S. government purchased a 10% stake in Intel, a move the Trump administration framed as a strategic initiative to support domestic advanced chip manufacturing capabilities. Subsequently, NVIDIA announced a $5 billion investment in Intel. Boosted by these investments, Intel's stock price has nearly tripled over the past year.

On the manufacturing front, Intel's latest Xeon processors utilize its most advanced 18A process technology and are produced at its wafer fabrication plant in Arizona, which opened last year. Despite Intel's massive investment in building out an external foundry business, its own processors remain the new facility's largest customer. Furthermore, as Lip-Bu Tan revealed on LinkedIn this week, Elon Musk has commissioned Intel to design, manufacture, and package custom chips for SpaceX, xAI, and Tesla, with the project located at its Terafab facility in Texas. Financial details and timelines for this project have not yet been disclosed.

## IPU Collaboration Continues to Deepen; Google's In-House Chip Strategy Advances in Parallel

In addition to the CPU collaboration, Google and Intel also reaffirmed their ongoing work in infrastructure processing units (IPUs) on Thursday.

The two companies have been jointly developing this programmable accelerator since 2022; it is designed to offload network, storage, and security functions from the main CPU. Google told CNBC that the chip was an industry first when the collaboration began four years ago, aiming to help customers in traditional data centers make fuller use of the main CPU's computing power by taking over "overhead" tasks such as network traffic routing, storage management, data encryption, and running virtualization software.

Notably, Google is simultaneously advancing its self-developed chip strategy. The company has been developing its own AI accelerator, the Tensor Processing Unit (TPU), for over a decade, and in 2024 it launched Axion, a self-designed Arm-based CPU that bypasses the x86 architecture Intel dominates. The expanded cooperation with Intel is therefore a strategic supplement within Google's diversified chip deployment, rather than a wholesale bet on a single supplier.

### Related Stocks

- [NVDX.US](https://longbridge.com/en/quote/NVDX.US.md)
- [GOOG.US](https://longbridge.com/en/quote/GOOG.US.md)
- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [NVDL.US](https://longbridge.com/en/quote/NVDL.US.md)
- [04335.HK](https://longbridge.com/en/quote/04335.HK.md)
- [GOOW.US](https://longbridge.com/en/quote/GOOW.US.md)
- [XSD.US](https://longbridge.com/en/quote/XSD.US.md)
- [SOXL.US](https://longbridge.com/en/quote/SOXL.US.md)
- [NVDA.US](https://longbridge.com/en/quote/NVDA.US.md)
- [588780.CN](https://longbridge.com/en/quote/588780.CN.md)
- [512480.CN](https://longbridge.com/en/quote/512480.CN.md)
- [588170.CN](https://longbridge.com/en/quote/588170.CN.md)
- [INTC.US](https://longbridge.com/en/quote/INTC.US.md)
- [INTW.US](https://longbridge.com/en/quote/INTW.US.md)
- [XLK.US](https://longbridge.com/en/quote/XLK.US.md)
- [159516.CN](https://longbridge.com/en/quote/159516.CN.md)
- [512760.CN](https://longbridge.com/en/quote/512760.CN.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [GOOGL.US](https://longbridge.com/en/quote/GOOGL.US.md)
- [GGLL.US](https://longbridge.com/en/quote/GGLL.US.md)

## Related News & Research

- [Intel and Google to double down on AI CPUs with expanded partnership](https://longbridge.com/en/news/282200227.md)
- [Google announcing $30 million funding globally over next three years to help global hotlines - blog](https://longbridge.com/en/news/281863393.md)
- [Intel and Google Deepen Collaboration to Advance AI Infrastructure with Xeon CPUs and Custom IPUs](https://longbridge.com/en/news/282231249.md)
- [Google wants more Intel inside ... its datacenters, taps Chipzilla for more SmartNICs](https://longbridge.com/en/news/282239789.md)
- [Google makes it easy to deepfake yourself](https://longbridge.com/en/news/282182025.md)