
Understanding the "Google Chain": Full-Stack AI Innovation, with TPU + OCS Jointly Building the Next-Generation Intelligent Computing Network

Zhongtai Securities believes that the key technological variable behind this round of the "Google Chain" breakout is the comprehensive introduction of OCS (Optical Circuit Switch) technology, which in essence transmits data directly over physical optical paths and dispenses entirely with the "optical-electrical-optical" signal conversion process. By deeply integrating its self-developed TPU chips with OCS, Google has not only broken through the energy-efficiency and scalability bottlenecks of traditional data centers but has also established new architectural standards for the next generation of intelligent computing networks, building an AI moat around its full-stack advantages in chips (TPU), networks (OCS), models (Gemini), and applications (cloud computing, search, advertising, etc.).
Against the backdrop of an increasingly heated AI arms race, Google is building a unique computing power moat through its "full-stack" innovation from chips to networks.
According to Zhongtai Securities, the key technological variable driving this round of "Google Chain" explosion is the comprehensive introduction of OCS (Optical Circuit Switching) technology. By deeply integrating its self-developed TPU chips with OCS technology, Google has not only broken through the energy efficiency and scalability bottlenecks of traditional data centers but also established new architectural standards for the next generation of intelligent computing networks.
The deep coupling of TPU and OCS not only supports the efficient iteration of large models like Gemini but also directly drives the incremental demand in upstream optical modules (especially 1.6T), MEMS chips, optical devices, and other industry chain segments. AI data centers are evolving from static architectures to dynamic photonic interconnections.

TPU v7 "Ironwood" Volume Ramp: The Dominant Force in the ASIC Market
Zhongtai Securities believes that Google has built an AI moat around its full-stack advantages in chips (TPU), networks (OCS), models (Gemini), and applications (cloud computing, search, advertising, etc.).
Since establishing the Google Brain laboratory in 2011 and making its first foray into AI, Google has produced a series of influential research results, including the Transformer architecture released in 2017 and the multimodal large model Gemini launched in 2023. It has gradually woven AI into its diverse businesses, which in turn supply massive amounts of data for training and refining its models.
Analysts emphasize that the leapfrog development of Google's self-developed chips is the core of its computing power strategy.
The soon-to-be-released TPU v7 (Ironwood) represents a qualitative leap in performance, with single-chip computing power more than ten times that of TPU v5p and peak memory bandwidth of 7.4 TB/s per chip.
In terms of cluster architecture, Ironwood continues to use and optimize the 3D torus topology, in which multiple 4×4×4 cubic building blocks can be dynamically combined and a single cluster can scale to 9,216 chips. To match this extremely high compute density, TPU v7 is beginning to be paired with 1.6T optical modules, which has raised market expectations for high-speed optical module demand.
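To make the scale arithmetic concrete, here is a minimal Python sketch assuming only the figures quoted above (4×4×4 building blocks and a 9,216-chip ceiling); the wrap-around neighbor function is a generic 3D-torus illustration, not Google's actual interconnect logic.

```python
# Back-of-the-envelope view of the Ironwood pod figures cited above.
# The cube/pod numbers come from the article; the neighbor function is a
# generic 3D-torus wraparound used purely for illustration.

CUBE_DIM = 4                        # chips per edge of one building-block cube
CHIPS_PER_CUBE = CUBE_DIM ** 3      # 4 * 4 * 4 = 64 chips per cube
POD_CHIPS = 9_216                   # maximum cluster size quoted for Ironwood

cubes_per_pod = POD_CHIPS // CHIPS_PER_CUBE   # 9,216 / 64 = 144 cubes

def torus_neighbors(x, y, z, dim=CUBE_DIM):
    """Wrap-around neighbors of chip (x, y, z) in a 3D torus of side `dim`."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [((x + dx) % dim, (y + dy) % dim, (z + dz) % dim) for dx, dy, dz in steps]

print(f"{cubes_per_pod} cubes x {CHIPS_PER_CUBE} chips = {POD_CHIPS} chips per pod")
print("torus neighbors of (0, 0, 0):", torus_neighbors(0, 0, 0))
```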

**Supply chain research indicates that by 2026, Google TPU will become the main force in the global self-developed ASIC market, with expected shipments far exceeding those of competitors such as AWS Trainium and Microsoft Maia. With the dual pull of NVIDIA's GB200 and Google's TPU v7, industry demand for 1.6T optical modules is expected to be revised upward to over 20 million units.**

OCS: The Key Technology to Break the Bottleneck of Traditional Optical Switching
Zhongtai Securities stated that Google's core logic in the large-scale introduction of OCS (Optical Circuit Switch) in AI data centers is to solve the power consumption and efficiency challenges brought by scale-out.
Traditional data center architectures are running into hard limits. In the conventional Clos architecture, as computing clusters scale out exponentially, the electrical-signal-based packet switching layer (EPS) faces severe power consumption and heat dissipation problems, along with expensive cabling. According to Cisco's estimates, the total power consumption of data center switching systems has increased 22-fold over the past decade.
In essence, Google's introduction of OCS lets data travel directly over physical optical paths, completely abandoning the "optical-electrical-optical" signal conversion process.
OCS is also the key to server disaggregation: it allows computing resources to be dynamically orchestrated across racks, combining compute like building blocks and breaking through the resource-waste bottleneck of static racks. In the Ironwood cluster, 48 OCS switches interconnect 9,216 TPU chips, creating a low-latency, high-bandwidth dynamic photonic network.
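As a mental model of what such a dynamic photonic network does, the sketch below treats an OCS as nothing more than a reconfigurable one-to-one port mapping; the port count and the cube-face wiring in the example are illustrative assumptions, not Google's actual control plane.

```python
# Toy model of an optical circuit switch: a reconfigurable one-to-one port map.
# Once two ports are connected, traffic follows a physical light path end to
# end; "switching" means rewriting the map, not inspecting packets.
# The 128-port default and the cube-face labels below are illustrative only.

class OpticalCircuitSwitch:
    def __init__(self, num_ports: int = 128):
        self.num_ports = num_ports
        self.circuits = {}                      # port -> peer port

    def connect(self, a: int, b: int) -> None:
        """Establish a light path between ports a and b."""
        self.circuits[a], self.circuits[b] = b, a

    def disconnect(self, a: int) -> None:
        """Tear down the circuit on port a, e.g. to re-shape a slice of the pod."""
        b = self.circuits.pop(a, None)
        if b is not None:
            self.circuits.pop(b, None)

ocs = OpticalCircuitSwitch()
ocs.connect(0, 64)      # stitch a face link of cube A to a face link of cube B
ocs.disconnect(0)
ocs.connect(0, 96)      # the same OCS port now feeds a different cube
```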

Data backs up the superiority of this technological route: after introducing the customized OCS network, Google's network throughput increased by 30%, power consumption fell by 40%, network downtime dropped by a factor of 50, and, most importantly, capital expenditure was reduced by 30%.

Deconstructing Google's OCS: The Unique Value of MEMS Technology and Customized Optical Devices
Zhongtai Securities stated that to understand the investment value of the "Google Chain," one must understand the physical composition of OCS.
Currently, Google's mainstream Palomar OCS is based on MEMS (Micro-Electro-Mechanical Systems) solutions, featuring 136 optical path channels (with 128 channels actually in use). Its core working principle is to reflect optical signals through a 2D MEMS micro-mirror array, achieving millisecond-level optical path switching without the need for optical transceivers for electrical signal conversion.
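A toy numerical sketch of that principle follows, assuming a square mirror grid and a simple linear angle-to-port mapping purely for illustration; the real Palomar optics are considerably more involved.

```python
# Toy beam-steering model: each input port has a micro-mirror whose two tilt
# angles decide which output collimator the reflected beam lands on.
# The grid size, tilt range, and linear mapping are assumptions for illustration.

TILT_RANGE_DEG = 6.0        # assumed full mechanical tilt range per axis
GRID = 12                   # 12 x 12 = 144 positions, enough for 136 channels

def mirror_tilt_for(target_port: int) -> tuple:
    """Map a target output port to an (x, y) mirror tilt on a GRID x GRID array."""
    row, col = divmod(target_port, GRID)
    step = TILT_RANGE_DEG / (GRID - 1)
    return (row * step - TILT_RANGE_DEG / 2, col * step - TILT_RANGE_DEG / 2)

# Re-pointing a mirror is a mechanical move on the order of milliseconds; once
# set, the light path carries traffic at full line rate with no O-E-O conversion.
print(mirror_tilt_for(37))   # tilt needed to steer an input beam toward port 37
```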

This system brings a series of unique hardware requirements. First, customized optical modules: Google has integrated a circulator into the optical module, enabling bidirectional transmission over a single optical fiber. This not only reduces the required number of ports and optical cables by 40% compared with traditional fat-tree architectures, but also creates an incremental market for circulators (a rough fiber-count sketch follows after the next point).
Second, there are the core optical components, including MEMS arrays, collimators, and 2D lens arrays, which carry high per-unit value. In addition, although Google currently favors MEMS solutions, it is also exploring alternative technology paths such as liquid crystal, piezoelectric ceramics, and silicon photonic waveguides, opening potential entry points for innovators in the supply chain.
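As a hypothetical illustration of the circulator point above: when transmit and receive share one strand, each logical link needs one fiber instead of two. The link count in the sketch is an assumed figure, and the 40% reduction quoted earlier also reflects topology savings versus a fat tree that this toy count ignores.

```python
# Rough fiber count with and without an in-module circulator. With a circulator,
# transmit and receive share a single strand, so fibers per link drop from 2 to 1.
# The 9,216-link figure is a hypothetical "one optical link per TPU" assumption.

def fiber_count(links: int, circulator: bool) -> int:
    fibers_per_link = 1 if circulator else 2
    return links * fibers_per_link

LINKS = 9_216
print(fiber_count(LINKS, circulator=False))   # 18432 fibers without circulators
print(fiber_count(LINKS, circulator=True))    #  9216 fibers with circulators
```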

The rise of OCS has opened a new incremental segment in the optical communications industry chain. As other cloud service providers such as Microsoft and Meta also begin to explore OCS deployments, LightCounting forecasts that the OCS market will grow at a compound annual growth rate of 28% from 2024 to 2029, putting the industry at the start of a period in which both the technology and demand take off.


