---
title: "AMD steps into the era of rack-level AI infrastructure! Strong partnership with Celestica to build the Helios computing cluster"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/279383938.md"
description: "AMD is partnering with Celestica to launch the Helios rack-level AI computing infrastructure, aimed at competing with NVIDIA's NVL72 platform. Helios is designed to serve as the core of AI data centers, supporting large-scale AI training and inference. The platform integrates high-performance networking and liquid cooling units to boost computing efficiency. Celestica is responsible for the research, development, and manufacturing of the high-performance network switches in the Helios architecture, with large-scale rollout expected by the end of 2026."
datetime: "2026-03-17T07:17:18.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/279383938.md)
  - [en](https://longbridge.com/en/news/279383938.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/279383938.md)
---

# AMD steps into the era of rack-level AI infrastructure! Strong partnership with Celestica to build the Helios computing cluster

According to Zhitong Finance APP, AMD (AMD.US) has announced a major partnership with Celestica (CLS.US) to launch its new Helios rack-level AI computing infrastructure platform, which will compete with NVIDIA's NVL72 rack-level AI platform in the global AI data center market. For AMD, which aims to capture a meaningful share of the trillion-dollar AI computing cluster market currently dominated by NVIDIA (NVDA.US) with a market share of up to 90%, Helios is crucial to its revenue and profit prospects. AMD is shifting its competitive focus to integrated, cabinet-level systems comparable to NVIDIA's NVL72, with the large-scale launch of the Helios AI computing cluster planned for the end of 2026, going head-to-head with NVIDIA's rack-level AI infrastructure.
In a statement, the two companies said that when the platform launches, Celestica will be responsible for the research, design, and manufacturing of the high-performance network switches used for vertical scale-up within the AMD Helios rack-level AI computing cluster architecture. AMD Helios is a complete, high-performance, open-standard rack-level AI infrastructure platform designed for ultra-large-scale AI training and inference in AI data centers.

Rack-level AI architecture is currently the most popular approach to cluster computing: the entire rack, rather than a single CPU/GPU server, serves as the fundamental computing unit for massive AI workloads. It integrates AI GPUs/AI ASICs, high-performance networking, and liquid cooling units into a single AI computing infrastructure system, enabling efficient training of large language models (LLMs) and handling of massive large-model workloads. The two companies also stated that these scale-up switches will use state-of-the-art network chips to provide high-speed interconnects between next-generation AMD Instinct MI450 series AI GPUs, delivering cutting-edge performance optimized for large-scale AI computing infrastructure clusters.

"The Helios rack-level AI solution represents a new blueprint for AI computing infrastructure, enabling customers to deploy AI data centers at scale with the performance, efficiency, and flexibility required for next-generation massive AI workloads," said Forrest Norrod, Executive Vice President and General Manager of AMD's Data Center Solutions Business Unit, in the statement. The two companies noted that they are working together to support one-click, high-efficiency deployment of Helios across cloud computing platforms, enterprise organizations, and large research environments.
On the latest progress in their joint effort to ramp up Helios production capacity, Celestica's stock rose about 3% at Monday's U.S. close, while AMD's stock briefly gained over 3% before finishing up 1.7%. The AMD Helios rack-level AI computing infrastructure is expected to begin bulk shipments to major cloud computing clients such as Microsoft and Amazon by the end of 2026.

**Strong Alliance to Combat the "NVIDIA Blackwell Series"**

AMD's partnership with Celestica to accelerate the market launch of the Helios rack-scale AI platform coincides with its collaboration with several tech leaders to counter NVIDIA's dominance in vertically integrated AI computing infrastructure solutions. AMD previously announced partnerships with HPE (Hewlett Packard Enterprise) and Broadcom, aiming to deliver open, rack-scale AI computing infrastructure at scale for high-performance computing clusters and large AI data centers, while working to accelerate global "Sovereign AI" research. HPE will be among the first system suppliers to adopt AMD's Helios rack-scale AI computing cluster architecture, and AMD and HPE will integrate a customized HPE Juniper Networking high-performance scale-up switch built in deep collaboration with ASIC and high-performance networking leader Broadcom. The system aims to simplify the deployment of larger AI computing infrastructure clusters and to offer a more cost-effective and energy-efficient rack-scale alternative to NVIDIA's Blackwell series. Helios essentially pits AMD against NVIDIA's Blackwell GB200 NVL72 in the rack-scale AI infrastructure space.
Both systems use 72 GPUs plus CPUs, high-speed interconnects, liquid cooling, and rack-scale system engineering as the basic unit for AI workloads, rather than treating a single server as the core product. AMD officially defines Helios as an open rack architecture based on OCP Open Rack Wide, aimed at large-scale training and inference; NVIDIA defines the GB200 NVL72 as a liquid-cooled rack-scale platform composed of 36 Grace CPUs and 72 Blackwell GPUs. In other words, Helios is not just "another batch of MI450 GPUs" but AMD's first real attempt to confront NVIDIA's NVL72 with a complete cabinet-level system.

Compared with AMD's previous generation of AI GPUs, the performance leap of Helios is significant. AMD's official benchmarks claim that Helios can deliver up to a 36-fold performance improvement over the previous-generation AMD AI computing platform, signaling that AMD's approach to AI computing infrastructure has shifted from "selling faster GPU cards" to "selling complete AI factories," packaging GPUs, CPUs, NICs, liquid cooling, network topology, and ROCm into one comprehensive AI computing solution.

The biggest selling points of Helios are its memory capacity and open interconnect. According to AMD, a 72-GPU Helios rack can provide up to 2.9 exaFLOPS of FP4 compute, 1.4 exaFLOPS of FP8, 31 TB of HBM4, 1.4 PB/s of aggregate memory bandwidth, and 260 TB/s of scale-up interconnect bandwidth. Compared with NVIDIA's GB200 NVL72, Helios is significantly more aggressive on memory capacity, raw scale-up bandwidth, and open rack design, making it more attractive for long-context, large-parameter models and bandwidth-sensitive training/inference systems. AMD even publicly claims that Helios's memory capacity is 50% higher than that of NVIDIA's next-generation platform, the Vera Rubin system.
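As a rough sanity check (not from AMD's announcement), the quoted rack-level totals can be divided by the 72 GPUs per rack to estimate per-GPU figures. The sketch below simply restates the claimed totals as inputs; the derived per-GPU numbers are back-of-the-envelope estimates, not official specifications.

```python
# Back-of-the-envelope per-GPU estimates from the quoted 72-GPU Helios
# rack totals (input values are AMD's publicly claimed rack figures).
GPUS_PER_RACK = 72
rack_hbm4_tb = 31          # total HBM4 capacity, TB
rack_fp4_exaflops = 2.9    # FP4 compute, exaFLOPS
rack_fp8_exaflops = 1.4    # FP8 compute, exaFLOPS

# 1 TB = 1000 GB, 1 exaFLOPS = 1000 PFLOPS (decimal units, as in the article)
hbm4_per_gpu_gb = rack_hbm4_tb * 1000 / GPUS_PER_RACK
fp4_per_gpu_pflops = rack_fp4_exaflops * 1000 / GPUS_PER_RACK
fp8_per_gpu_pflops = rack_fp8_exaflops * 1000 / GPUS_PER_RACK

print(f"HBM4 per GPU:  ~{hbm4_per_gpu_gb:.0f} GB")      # ~431 GB
print(f"FP4 per GPU:   ~{fp4_per_gpu_pflops:.1f} PFLOPS")  # ~40.3 PFLOPS
print(f"FP8 per GPU:   ~{fp8_per_gpu_pflops:.1f} PFLOPS")  # ~19.4 PFLOPS
```

Note that the FP4 figure is roughly twice the FP8 figure, which is consistent with the usual halving of precision doubling throughput on modern AI accelerators.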
**Why did AMD choose Celestica?**

The reason AMD needs to partner with Celestica is quite practical: the bottleneck in rack-level AI systems is no longer just the GPU, but high-speed scale-up switches, liquid cooling engineering, manufacturing yield, delivery capability, and supply chain resilience. AMD's official statement makes clear that Celestica is responsible for the R&D, design, and manufacturing of the Helios scale-up networking switches, which are based on UALoE and directly determine whether MI450 clusters can run stably at large scale. Celestica's value lies not in contract-manufacturing an ordinary component, but in helping AMD close the hardest gap in its transition from a chip company to a systems company: engineering and productizing the open computing cluster architecture and delivering it to hyperscalers' requirements. This mirrors the same industrial logic behind NVIDIA's recent emphasis on tight integration of cabinet systems, high-performance networking, and operational software.

Celestica is a Canada-based electronics manufacturing services (EMS/ODM) and infrastructure solutions provider. Beyond traditional hardware assembly, it plays a key role in the design, manufacturing, and integration of AI data center infrastructure products such as network switches, servers, rack-level solutions, and high-bandwidth network components. As cloud service providers and large tech companies (such as Google, Meta, and Amazon) ramp up construction of AI data centers, demand for high-speed network connectivity, custom hardware, and rack-level integrated solutions has surged; Celestica is a core supplier of the high-performance network switches, servers, ASIC/TPU-related hardware modules, and integration services these data centers require.
The company's stock price surged by as much as 220% over the course of 2025.

## Related Stocks

- [FTXL.US](https://longbridge.com/en/quote/FTXL.US.md)
- [PSI.US](https://longbridge.com/en/quote/PSI.US.md)
- [SOXQ.US](https://longbridge.com/en/quote/SOXQ.US.md)
- [SOXL.US](https://longbridge.com/en/quote/SOXL.US.md)
- [NVDA.US](https://longbridge.com/en/quote/NVDA.US.md)
- [NVDU.US](https://longbridge.com/en/quote/NVDU.US.md)
- [XSD.US](https://longbridge.com/en/quote/XSD.US.md)
- [IGPT.US](https://longbridge.com/en/quote/IGPT.US.md)
- [AMDL.US](https://longbridge.com/en/quote/AMDL.US.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [AMUU.US](https://longbridge.com/en/quote/AMUU.US.md)
- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [AMD.US](https://longbridge.com/en/quote/AMD.US.md)

## Related News & Research

- [The AI Stock Wall Street Can't Stop Talking About in 2026](https://longbridge.com/en/news/282407351.md)
- [DeDora Capital Inc. Reduces Position in Advanced Micro Devices, Inc. $AMD](https://longbridge.com/en/news/282310691.md)
- [Korean AI chip startup DEEPX, Hyundai work on robots powered by generative AI](https://longbridge.com/en/news/282774224.md)
- [Key facts: ARK Invest Sells about $10.52M AMD Shares; AMD Updates GAIA](https://longbridge.com/en/news/282585131.md)
- [Linux 7.0 debuts as Linus Torvalds ponders AI's bug-finding powers](https://longbridge.com/en/news/282579651.md)