---
title: "The Moment of 'Division of Labor' for AI Chips! Why Does Google's Eighth-Generation TPU Come in Two Models?"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/283677689.md"
description: "Google answers a single question—efficiency—with two chips. The TPU 8t enhances massive-scale training and throughput efficiency by leveraging SparseCore, FP4, and a new network architecture to significantly boost computational scalability; the TPU 8i focuses on low-latency inference, improving concurrency and decoding efficiency through ultra-large SRAM and CAE. Both share a unified software stack and are deeply integrated into cloud AI infrastructure, directly addressing the divergence of AI workloads and the trend toward optimizing compute costs"
datetime: "2026-04-22T13:39:27.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/283677689.md)
  - [en](https://longbridge.com/en/news/283677689.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/283677689.md)
---

# The Moment of 'Division of Labor' for AI Chips! Why Does Google's Eighth-Generation TPU Come in Two Models?

Google is pushing its AI chip strategy into a new phase.

At the Google Cloud Next 2026 conference held in Las Vegas on Wednesday, Google Cloud unveiled two new models of its eighth-generation Tensor Processing Unit (TPU): the TPU 8t, designed specifically for training, and the TPU 8i, optimized for inference. This marks Google's first split of training and inference tasks onto separate chips, signaling a major shift in its AI hardware roadmap.

Both chips are scheduled for official release later in 2026. Compared to the seventh-generation Ironwood TPU released last November, the TPU 8t delivers 2.8 times the performance at the same price point, while the TPU 8i offers an 80% performance increase. Both chips more than double the previous generation's performance per watt: the TPU 8t improves it by 124% and the TPU 8i by 117%.

Amin Vahdat, Senior Vice President and Chief Technology Officer of AI and Infrastructure at Google, stated that with the rise of AI agents, "the industry will benefit from chips specifically optimized for the distinct needs of training and inference." Alphabet CEO Sundar Pichai also noted in a blog post that this architecture aims to "provide the massive throughput and low latency required to run millions of agents simultaneously in a cost-effective manner."

## Why Split Into Two Chips?

Splitting the eighth-generation TPU into two models is a direct response to the increasingly divergent demands of AI workloads. Pre-training, post-training, and real-time inference now differ sharply in their computational profiles: training chases extreme throughput and scale, while inference is far more sensitive to latency and concurrency. A single chip struggles to be efficient at both.

According to Google's technical blog, the design philosophy of the eighth-generation TPU revolves around three pillars: scalability, reliability, and efficiency. While both chips share the core DNA of Google's AI software stack, each has been specially optimized for different bottlenecks.

Both chips integrate Axion CPUs based on the Arm architecture to eliminate host-side bottlenecks caused by data preprocessing latency, ensuring that TPU compute units remain fully utilized.

## TPU 8t: The Compute Engine for Massive-Scale Training

Positioned as a dedicated accelerator for pre-training and embedding-intensive workloads, the TPU 8t can "compress state-of-the-art model development cycles from months to weeks," according to Google.

In terms of scale, up to 9,600 TPU 8t chips can be combined into a single superpod, and, using the JAX and Pathways frameworks, distributed training can extend across a single cluster of more than 1 million TPU chips.
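The article names JAX and Pathways as the scaling frameworks. As a rough illustration of what that programming model looks like, here is a minimal data-parallel sharding sketch in today's public JAX; the mesh simply spans whatever accelerators are attached, and nothing in it is TPU 8t specific:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# A one-dimensional device mesh; on a TPU superpod the same code
# would span thousands of chips instead of a handful.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

# Shard the batch dimension across the mesh's "data" axis.
batch = jnp.ones((128, 512))
batch = jax.device_put(batch, NamedSharding(mesh, P("data", None)))

@jax.jit
def step(x):
    # Compiled once; XLA inserts any cross-chip communication.
    return jnp.tanh(x @ x.T).mean()

print(step(batch))
```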

At the chip level, the TPU 8t introduces three key technological innovations.

First is the SparseCore accelerator, which handles the irregular memory access patterns of embedding lookups, offloading data-dependent global aggregation from the Matrix Multiplication Units (MXUs) and avoiding the wasted zero-value operations that bottleneck general-purpose chips.
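In code, an embedding lookup is a gather rather than a matrix multiply, which is what makes its memory traffic irregular. A toy JAX illustration (the table size and indices are made up):

```python
import jax.numpy as jnp

# A toy embedding table: at hundreds of thousands of rows, gathers
# start to thrash caches on hardware built for dense matrix math.
vocab, dim = 100_000, 128
table = jnp.ones((vocab, dim))

# The row indices are data-dependent and effectively random,
# which is the irregular access pattern SparseCore targets.
ids = jnp.array([3, 99_998, 42, 7])
vectors = jnp.take(table, ids, axis=0)   # a gather, not a matmul
print(vectors.shape)                     # (4, 128)
```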

Second is native FP4 support, which doubles MXU throughput via 4-bit floating-point numbers while reducing the energy consumption of data movement, allowing larger model layers to reside in local hardware buffers.
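The throughput gain comes from halving the bits per value. Exact FP4 semantics (an e2m1 float format) vary by toolchain, so here is the generic scale-and-round idea sketched with a symmetric 4-bit integer grid in JAX; this illustrates the trade-off, not Google's implementation:

```python
import jax.numpy as jnp

def quantize_4bit(x):
    """Symmetric quantization to 15 signed levels (-7..7): the kind
    of reduced range a 4-bit format trades for doubled throughput."""
    scale = jnp.max(jnp.abs(x)) / 7.0
    q = jnp.clip(jnp.round(x / scale), -7, 7).astype(jnp.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(jnp.float32) * scale

x = jnp.array([0.11, -0.53, 0.97, -0.08])
q, s = quantize_4bit(x)
print(dequantize(q, s))   # a coarse reconstruction of x
```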

Third is a more balanced Vector Processing Unit (VPU) expansion design, enabling better pipeline overlap between vector operations like quantization and softmax with matrix multiplication, thereby improving overall chip utilization.

At the network level, Google introduced a new Virgo network architecture for the TPU 8t, built on high-radix switches and a flat, two-layer non-blocking topology. This raises Data Center Network (DCN) bandwidth by up to four times and Inter-Chip Interconnect (ICI) bandwidth by two times over the previous generation. A single Virgo network can connect more than 134,000 TPU 8t chips, providing up to 47 petabits per second of non-blocking bidirectional bandwidth, with total compute power exceeding 1.6 million ExaFlops.
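Dividing the article's own figures gives a back-of-envelope sense of each chip's share of that fabric; this is illustrative arithmetic, not a published spec:

```python
chips = 134_000
fabric_pbps = 47                              # petabits/s, bidirectional
per_chip_gbps = fabric_pbps * 1e6 / chips     # 1 Pb/s = 1e6 Gb/s
print(f"{per_chip_gbps:,.0f} Gb/s per chip")  # ~351 Gb/s
```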

Regarding storage, the TPU 8t introduces TPUDirect RDMA and TPUDirect Storage, which let data bypass the host CPU and move directly between the TPU's high-bandwidth memory (HBM), network cards, and high-speed storage. Storage access is 10 times faster than on the seventh-generation Ironwood TPU, keeping the MXUs fully fed when processing large-scale multimodal datasets.

## TPU 8i: The Low-Latency Expert for High-Concurrency Inference

Designed for post-training stages and high-concurrency inference scenarios, the TPU 8i places its architectural focus on reducing latency and enhancing concurrency per chip.

On-chip memory is the most significant hardware feature of the TPU 8i. Each chip integrates 384MB of Static Random Access Memory (SRAM)—three times that of the previous Ironwood generation—allowing larger KV Caches to remain entirely on-chip. This significantly reduces idle waiting time for cores during long-context decoding, a critical factor for AI tasks requiring multi-step reasoning.
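A worked example shows the scale this implies. Every model dimension below is a hypothetical assumption, not a TPU 8i or Gemini spec; the point is only how 384MB translates into on-chip context length:

```python
# Hypothetical decoder config -- none of these numbers come from Google.
layers     = 32
kv_heads   = 8         # grouped-query attention
head_dim   = 128
bytes_per  = 1         # FP8 keys/values
sram_bytes = 384 * 1024**2

# K and V per token per layer: 2 * kv_heads * head_dim * bytes
per_token = 2 * layers * kv_heads * head_dim * bytes_per
print(per_token)                 # 65,536 bytes per token
print(sram_bytes // per_token)   # ~6,144 tokens fit entirely on-chip
```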

The TPU 8i also introduces a Collective Acceleration Engine (CAE) that specifically accelerates the reduction and synchronization steps in autoregressive decoding and chain-of-thought processing. Each TPU 8i chip contains two Tensor Cores (TCs) and one CAE chiplet, replacing the four SparseCores of the previous Ironwood generation, and cuts on-chip collective-operation latency fivefold, directly boosting the throughput needed to run millions of agents simultaneously.
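The reduction-and-synchronization step the CAE targets is what frameworks expose as a collective; in JAX it is jax.lax.psum. A minimal shard_map sketch (the device layout is illustrative and nothing here is CAE-specific):

```python
import functools
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental.shard_map import shard_map

mesh = Mesh(np.array(jax.devices()), axis_names=("x",))

@jax.jit
@functools.partial(shard_map, mesh=mesh, in_specs=P("x"), out_specs=P())
def global_sum(local):
    # All-reduce across chips: the synchronization pattern a
    # collective engine would accelerate in hardware.
    return jax.lax.psum(jnp.sum(local), axis_name="x")

print(global_sum(jnp.arange(8.0)))   # 28.0
```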

Regarding network topology, the TPU 8i abandons the 3D torus structure used by the TPU 8t in favor of a new Boardfly interconnect topology. In a 1,024-chip configuration, a 3D torus requires up to 16 hops between any two chips; Boardfly's high-radix design compresses the maximum to 7 hops, shrinking the network diameter by 56% and improving all-to-all communication latency by up to 50%. This particularly benefits the frequent cross-chip token routing of Mixture of Experts (MoE) and reasoning models. Boardfly is hierarchical, scaling from four-chip building blocks up to complete pods of up to 1,152 chips, with inter-group connectivity via Optical Circuit Switches (OCS).
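The hop-count claims are easy to sanity-check. Assuming the 1,024 chips form a 16 x 8 x 8 torus (the shape is an assumption; the article does not give it), the wraparound links halve the worst-case distance along each axis:

```python
# Assumed layout: 1,024 chips as a 16 x 8 x 8 3D torus.
dims = (16, 8, 8)
diameter = sum(d // 2 for d in dims)    # wraparound halves each axis
print(diameter)                         # 16 hops, as the article says
print(round((1 - 7 / diameter) * 100))  # 56% smaller diameter at 7 hops
```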

## Software Ecosystem and Market Significance

Google emphasizes that realizing the hardware's performance depends on a tightly matched software stack.

The eighth-generation TPU continues the software framework established by the seventh-generation Ironwood, supporting mainstream frameworks such as JAX, PyTorch, Keras, and vLLM, and offering the Pallas custom kernel language to fully unlock the hardware potential of SparseCore and CAE.
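Pallas is already public in JAX today. A minimal kernel looks like the following; this is an elementwise toy to show the shape of the API, not a SparseCore or CAE kernel:

```python
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

def add_kernel(x_ref, y_ref, o_ref):
    # Refs are blocks of on-chip memory; reads and writes are explicit.
    o_ref[...] = x_ref[...] + y_ref[...]

@jax.jit
def add(x, y):
    return pl.pallas_call(
        add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        interpret=True,  # lets this run on CPU for testing; drop on a TPU
    )(x, y)

x = jnp.arange(8, dtype=jnp.float32)
print(add(x, x))   # [ 0.  2.  4. ... 14.]
```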

Google also announced that native PyTorch support for TPU has now entered the preview stage, allowing users to migrate existing PyTorch models to TPU for execution without modifying code.
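For comparison, the existing route is the torch_xla bridge, which still requires an explicit device step; the preview presumably removes exactly this boilerplate (a sketch of today's flow, not the new API):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                 # today's explicit TPU handle
model = torch.nn.Linear(512, 10).to(device)
x = torch.randn(8, 512, device=device)
logits = model(x)
xm.mark_step()                           # flush lazily recorded ops to the TPU
```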

From a market perspective, Google's dual-chip strategy directly addresses the cost pressures of AI infrastructure.

### Related Stocks

- [GOOGL.US](https://longbridge.com/en/quote/GOOGL.US.md)
- [SOXL.US](https://longbridge.com/en/quote/SOXL.US.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [GOOW.US](https://longbridge.com/en/quote/GOOW.US.md)
- [GGLL.US](https://longbridge.com/en/quote/GGLL.US.md)
- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [CLOU.US](https://longbridge.com/en/quote/CLOU.US.md)
- [GOOG.US](https://longbridge.com/en/quote/GOOG.US.md)

## Related News & Research

- [Google Unveils New AI Super-Chips To Slash Costs, Rival Nvidia](https://longbridge.com/en/news/283688748.md)
- [AI Overviews are coming to your Gmail at work](https://longbridge.com/en/news/283702020.md)
- [GOOGLE: LAUNCHING TWO SPECIALIZED TPU CHIPS](https://longbridge.com/en/news/283664958.md)
- [Google Expands TPU Push As Demand Surges, Targets AI Inference Market](https://longbridge.com/en/news/283391187.md)
- [Forget one chip to rule them all: With TPU 8, Google has an AI arms race to win](https://longbridge.com/en/news/283660813.md)