---
title: "Insights into the Future of AI Chips from the 'Chip Olympics': Interconnect Bottlenecks Emerge as Packaging Innovation Becomes the Next Battleground"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/283000414.md"
description: "As HBM4 bandwidth approaches its limits and GPU scales continue to expand, bottlenecks in chip-to-chip communication and memory bandwidth are becoming increasingly apparent, driving the rapid convergence of solutions such as optical interconnects, CPO, DWDM, and UCIe. NVIDIA, Broadcom, Marvell, and other vendors have clarified their next-generation data center interconnect roadmaps, while TSMC's aLSI, Intel's UCIe-S, and multiple AI accelerator initiatives are competing around advanced packaging. Overall, computing power growth is increasingly dependent on system-level packaging and interconnect innovation, making packaging the core battlefield for AI chip competition."
datetime: "2026-04-16T12:31:00.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/283000414.md)
  - [en](https://longbridge.com/en/news/283000414.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/283000414.md)
---

# Insights into the Future of AI Chips from the 'Chip Olympics': Interconnect Bottlenecks Emerge as Packaging Innovation Becomes the Next Battleground

The semiconductor industry's premier annual circuit conference, ISSCC 2026—dubbed the "Chip Olympics"—released a series of technical signals with direct market significance: Samsung's HBM4 performance data was disclosed for the first time, NVIDIA's and Broadcom's optical interconnect roadmaps became clearer, and architectural details of AI accelerators from giants such as AMD and Microsoft were also revealed.

According to top semiconductor analysis firm SemiAnalysis, the HBM4 data presented by Samsung at this year's conference shows a bandwidth of 3.3 TB/s, with pin speeds of up to 13 Gb/s—more than double the maximum specified in the JEDEC standard. This indicates that Samsung is narrowing the technological gap with SK Hynix. Meanwhile, the **DWDM optical interconnect solution proposed by NVIDIA at the conference aligns closely with specifications released simultaneously by the OCI MSA industry alliance, further clarifying the technical direction for next-generation AI data center interconnects.**

If Samsung can continue to improve yield and reliability for HBM4, it will pose a substantive challenge to SK Hynix's market dominance; meanwhile, the gradual convergence of optical interconnect standards implies that investment windows for related supply chains are opening.

## ISSCC: The Annual Technology Barometer of the Semiconductor Industry

First, a brief introduction to ISSCC (International Solid-State Circuits Conference). It is one of the three top academic conferences in the semiconductor field, alongside IEDM and VLSI. Compared to the latter two, ISSCC places greater emphasis on circuit integration and implementation, with almost every paper including circuit diagrams and measured data. It serves as an important window for the industry to observe the actual progress of chip technology deployment.

This year's ISSCC is particularly noteworthy. According to SemiAnalysis, while papers presented at previous ISSCC events had varying degrees of direct impact on the industry, 2026 is distinctly different—many papers are highly relevant to current market hotspots, covering HBM4, LPDDR6, GDDR7, NAND flash, Co-Packaged Optics (CPO), advanced chip-to-chip interconnects, and processor architectures from MediaTek, AMD, NVIDIA, Microsoft, and other vendors.

## Samsung HBM4: Performance Breakthrough, but Yield and Cost Remain Concerns

Samsung is the only one of the three major memory manufacturers to publish an HBM4 technology paper at this ISSCC.

Its showcased HBM4 features a 12-layer stack, 36 GB capacity, and 2048 IO pins, achieving a bandwidth of 3.3 TB/s. The core DRAM utilizes a sixth-generation 10nm-class (1c) process, while the logic base chip employs SF4 advanced logic process.

The most critical architectural change lies in the separation of the base chip process. By migrating the base chip from the DRAM process to the SF4 logic process, Samsung reduced the operating voltage (VDDQ) from 1.1V in HBM3E to 0.75V—a drop of roughly 32%—while achieving higher transistor density and better area efficiency. Combined with adaptive body bias (ABB) control and a fourfold increase in TSV count, Samsung's HBM4 achieves pin speeds of 11 Gb/s at a core voltage below 1V, and up to 13 Gb/s at the top end, far surpassing the 6.4 Gb/s ceiling specified in the JEDEC HBM4 standard.
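The headline figures above are mutually consistent, which can be verified with a quick back-of-envelope check (all constants are taken directly from the reported specs; nothing here is independently measured):

```python
# Sanity check on Samsung's reported HBM4 figures.
IO_PINS = 2048          # HBM4 interface width (pins)
PIN_SPEED_GBPS = 13.0   # peak per-pin rate, Gb/s

# 2048 pins x 13 Gb/s, converted to bytes and terabytes.
bandwidth_tbps = IO_PINS * PIN_SPEED_GBPS / 8 / 1000
print(f"Peak bandwidth: {bandwidth_tbps:.2f} TB/s")  # ~3.33 TB/s

# VDDQ reduction from HBM3E (1.1V) to HBM4 (0.75V).
vddq_drop = (1.1 - 0.75) / 1.1
print(f"VDDQ reduction vs HBM3E: {vddq_drop:.0%}")   # ~32%
```

The quoted 3.3 TB/s thus corresponds to the 13 Gb/s peak pin speed rather than the sub-1V 11 Gb/s operating point.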

However, this technical route comes with significant costs. The SF4 process is more expensive than the TSMC N12 process used by SK Hynix and Micron's internal CMOS base solution. More critically, Samsung's 1c process frontend yield was only about 50% last year. Although it has been continuously improving, the low yield puts pressure on HBM4 gross margins. SemiAnalysis notes that Samsung's historical profit margins in HBM have already been lower than SK Hynix's, a situation that remains challenging in the HBM4 generation.

In terms of reliability and stability, Samsung still lags behind SK Hynix, but the trend of catching up at the technical level is quite evident.

## LPDDR6 and GDDR7: Samsung and SK Hynix Focus on Different Areas

Both Samsung and SK Hynix showcased LPDDR6 chips at this ISSCC. Both products support data rates up to 14.4 Gb/s, an improvement of approximately 35% over the fastest LPDDR5X.

There are differences in low-voltage performance. Samsung's LPDDR6 achieves 12.8 Gb/s at 0.97V, whereas SK Hynix reaches only 10.9 Gb/s at 0.95V, indicating that Samsung holds a power-efficiency advantage at lower pin speeds. Samsung also showcased an LPDDR6 PHY based on the SF2 process, delivering nearly a 50% reduction in read power in its efficiency mode.

SK Hynix's highlight was GDDR7. Its GDDR7 based on the 1c process can reach up to 48 Gb/s at 1.2V, and even at low voltages of 1.05V/0.9V, it achieves 30.3 Gb/s, which is higher than the 30 Gb/s memory equipped in the RTX 5080. Bit density reaches 0.412 Gb/mm², significantly outperforming Samsung's 1b process at 0.309 Gb/mm².

Notably, SemiAnalysis points out that NVIDIA's previously announced Rubin CPX large-context AI processor equipped with 128GB GDDR7 has basically disappeared from the 2026 roadmap, with NVIDIA shifting focus to the launch of the Groq LPX solution.

## Optical Interconnects: NVIDIA's DWDM Route Converges with Industry Standards

Optical interconnects were another core topic at this ISSCC, directly relating to the networking architecture of next-generation AI accelerator clusters.

NVIDIA proposed an optical interconnect solution based on DWDM (Dense Wavelength Division Multiplexing) at the conference, adopting an architecture of 8 multiplexed wavelengths at 32 Gb/s each, with a ninth wavelength carrying a forwarded clock to simplify SerDes design and improve energy efficiency. This aligns closely with specifications released by the OCI MSA (Optical Computing Interconnect Multi-Source Agreement), established just before OFC 2026. The OCI MSA focuses on 200 Gb/s bidirectional links, using a 4-wavelength 50G NRZ DWDM scheme for scale-up interconnects.

This development clarifies previous market confusion: NVIDIA's COUPE optical engine targets 200G PAM4 DR optics for scale-out switching, while DWDM is used for scale-up interconnects; the two routes run parallel without contradiction.

Broadcom showcased a 6.4T MZM optical engine composed of 64 channels of approximately 100G PAM4, completing test verification in the Tomahawk 5 51.2T CPO system. Broadcom stated it will switch to the COUPE solution in the future, but existing products will continue to use other packaging routes.

Marvell showcased an 800G Coherent-Lite transceiver designed for data center campus scenarios, with power consumption of only 3.72 pJ/b (excluding silicon photonics), roughly half that of traditional coherent transceivers, and latency under 300 nanoseconds over 40km of fiber.
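A quoted energy efficiency in pJ/bit translates directly into link power via power = energy-per-bit × bit-rate. Applying that to Marvell's figures (a back-of-envelope estimate using only the numbers above):

```python
# Rough power estimate for the 800G Coherent-Lite transceiver:
# power (W) = energy per bit (J/b) x bit rate (b/s).
ENERGY_PJ_PER_BIT = 3.72   # quoted efficiency, excluding silicon photonics
RATE_GBPS = 800            # link rate

power_w = ENERGY_PJ_PER_BIT * 1e-12 * RATE_GBPS * 1e9
print(f"{power_w:.2f} W")  # ~2.98 W, excluding silicon photonics
```

Roughly 3 W for an 800G link is what underpins the claim of "roughly half" the power of a traditional coherent transceiver.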

## Advanced Packaging and Chip-to-Chip Interconnects: Multiple Technologies Compete

As multi-chip designs become mainstream, chip-to-chip interconnects have become a performance bottleneck. Several companies showcased their respective solutions at this ISSCC.

TSMC demonstrated Active Local Silicon Interconnect (aLSI) technology, introducing edge-triggered transceiver (ETT) circuits in bridge chips to improve signal integrity, compressing PHY depth from 1043μm to 850μm, with total power consumption of only 0.36 pJ/b. SemiAnalysis noted that the test carrier's packaging design aligns closely with AMD's MI450 GPU, suggesting aLSI may be the packaging solution for AMD's next-generation products.

Intel showcased a chip-to-chip interface compatible with UCIe-S standards, based on a 22nm process, capable of achieving up to 48 Gb/s/channel interconnects over distances of 30mm on standard organic packages. This is considered a prototype solution for future Diamond Rapids Xeon CPUs.

Microsoft disclosed its chip-to-chip interconnect details, based on TSMC's N3P process, with system power consumption of 0.33 pJ/b at 24 Gb/s. SemiAnalysis believes this is precisely the custom high-bandwidth interconnect connecting two compute dies in Microsoft's Cobalt 200 CPU.
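The same energy-per-bit arithmetic gives a feel for these die-to-die figures: pJ/b multiplied by Gb/s works out numerically to milliwatts. A minimal sketch using Microsoft's quoted numbers (the only pairing above where both efficiency and lane rate are given):

```python
# Per-lane power implied by Microsoft's die-to-die link:
# pJ/bit x Gb/s = mW (since 1e-12 x 1e9 = 1e-3).
energy_pj_per_bit = 0.33   # quoted system efficiency
lane_rate_gbps = 24        # quoted lane rate

power_mw = energy_pj_per_bit * lane_rate_gbps
print(f"{power_mw:.2f} mW per lane")  # 7.92 mW
```

At well under 10 mW per lane, such links can be replicated by the hundreds across a package without interconnect power dominating the budget.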

## AI Accelerators: AMD, Microsoft, and Rebellions Architecture Details Revealed for the First Time

AMD detailed improvements of the MI355X GPU relative to the MI300X at the conference. The core compute die (XCD) migrated from N5 to N3P process, doubling matrix throughput without increasing area; the IO die (IOD) was consolidated from 4 chips to 2, reducing chip-to-chip interconnect overhead and lowering interconnect power consumption by approximately 20%.

Microsoft's Maia 200 is another important AI accelerator disclosed at this conference. As the last mainstream HBM accelerator to retain a monolithic, reticle-scale compute die, the Maia 200 is based on TSMC's N3P process, integrating over 10 PFLOPS of FP4 compute, 6 HBM3E stacks, and 28 bidirectional 400 Gb/s chip-to-chip links. Its packaging scheme resembles NVIDIA's H100, using a CoWoS-S interposer.

South Korean AI chip startup Rebellions publicly disclosed architectural details of its Rebel100 accelerator for the first time. The chip adopts Samsung's SF4X process and I-CubeS advanced packaging, equipped with 4 compute dies and 4 HBM3E stacks, and integrates silicon capacitors to improve HBM3E power delivery quality. SemiAnalysis notes that Samsung may be pushing this packaging technology, not yet adopted by mainstream AI accelerators, into the market by bundling I-CubeS packaging with front-end processes and using HBM supply conditions as leverage.

### Related Stocks

- [NVDA.US](https://longbridge.com/en/quote/NVDA.US.md)
- [INTC.US](https://longbridge.com/en/quote/INTC.US.md)
- [AVGO.US](https://longbridge.com/en/quote/AVGO.US.md)
- [MRVL.US](https://longbridge.com/en/quote/MRVL.US.md)
- [INTW.US](https://longbridge.com/en/quote/INTW.US.md)
- [NVDU.US](https://longbridge.com/en/quote/NVDU.US.md)
- [TSMU.US](https://longbridge.com/en/quote/TSMU.US.md)
- [PSI.US](https://longbridge.com/en/quote/PSI.US.md)
- [FTXL.US](https://longbridge.com/en/quote/FTXL.US.md)
- [AVGU.US](https://longbridge.com/en/quote/AVGU.US.md)
- [XLK.US](https://longbridge.com/en/quote/XLK.US.md)
- [XSD.US](https://longbridge.com/en/quote/XSD.US.md)
- [IXN.US](https://longbridge.com/en/quote/IXN.US.md)
- [SOXL.US](https://longbridge.com/en/quote/SOXL.US.md)
- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [MVLL.US](https://longbridge.com/en/quote/MVLL.US.md)
- [AVL.US](https://longbridge.com/en/quote/AVL.US.md)
- [NVDY.US](https://longbridge.com/en/quote/NVDY.US.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [AVGG.US](https://longbridge.com/en/quote/AVGG.US.md)
- [NVDL.US](https://longbridge.com/en/quote/NVDL.US.md)
- [AVGW.US](https://longbridge.com/en/quote/AVGW.US.md)
- [AVGX.US](https://longbridge.com/en/quote/AVGX.US.md)
- [SOXQ.US](https://longbridge.com/en/quote/SOXQ.US.md)

## Related News & Research

- [Korean AI chip startup DEEPX, Hyundai work on robots powered by generative AI](https://longbridge.com/en/news/282774224.md)
- [PREVIEW-TSMC likely to book fourth straight quarter of record profit on insatiable AI demand](https://longbridge.com/en/news/282478726.md)
- [Meta extends custom chips deal with Broadcom to power AI ambitions](https://longbridge.com/en/news/282752154.md)
- [TSMC sales beat estimates despite the conflict in Iran](https://longbridge.com/en/news/282368619.md)
- [$92 Million In Banned AI Chips Went From Super Micro To Little Known Chinese Tech Company](https://longbridge.com/en/news/282497405.md)