---
title: "The State of HBM4 Chronicled at CES 2026"
type: "News"
locale: "zh-CN"
url: "https://longbridge.com/zh-CN/news/272330630.md"
description: "High-bandwidth memory (HBM4) was a key focus at CES 2026, with Micron, Samsung, and SK hynix showcasing advancements. HBM4 addresses the memory wall in AI systems, promising significant improvements in bandwidth and efficiency. SK hynix revealed a 16-layer HBM4 device with 48GB capacity, while Samsung introduced a unique 1c DRAM process for better energy efficiency. Micron is also ramping up production of its HBM4 chips. The competition among these companies highlights the ongoing evolution in memory technology for AI applications."
datetime: "2026-01-12T23:20:38.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/272330630.md)
  - [en](https://longbridge.com/en/news/272330630.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/272330630.md)
---

> Available languages: [English](https://longbridge.com/en/news/272330630.md) | [繁體中文](https://longbridge.com/zh-HK/news/272330630.md)

# The State of HBM4 Chronicled at CES 2026

High-bandwidth memory, a critical component in modern AI systems, particularly for training large-scale models, was a centerpiece at CES 2026, with the memory trio of Micron, Samsung, and SK hynix showing their HBM4 hands. The message was one of readiness: HBM4 devices aimed squarely at the "memory wall" that threatens to plateau AI scaling.

HBM4 promises a way past the memory wall, the bottleneck where processing speeds outpace memory's ability to feed data to the processor, through the most significant architectural overhaul in the history of high-bandwidth memory. It is purpose-built for next-generation AI accelerators and data-center workloads, delivering major gains in bandwidth, efficiency, and system-level customization.
HBM4, the sixth generation of high-bandwidth memory, does that by moving beyond incremental speed bumps to a complete redesign of the memory interface; it nearly triples the performance of the early HBM3 devices that powered the first wave of the generative AI boom.

_HBM4, a generational leap in semiconductor technology, is rewriting the rules of AI infrastructure. (Source: SK hynix)_

HBM4 represents another fundamental shift in memory architecture: by integrating a logic base die, it turns the memory stack into a co-processor that can handle basic data operations before data reaches the main AI processor. That marks the end of the compute-only era, transforming memory from a passive storage bin into an active component.

The primary driver of the HBM4 frenzy is Nvidia's Rubin GPU platform, now in production and positioned as the first, and so far exclusive, consumer of early HBM4 devices. Micron, Samsung, and SK hynix have reportedly started delivering HBM4 samples to Nvidia, and all three plan to begin mass manufacturing of HBM4 chips in 2026.

### **SK hynix at CES 2026**

At CES 2026, SK hynix, the current HBM leader commanding more than 50% of the global market, unveiled a 16-layer HBM4 device with 48 gigabytes of capacity. By stacking DRAM up to 16 layers, it significantly boosts both capacity and speed, achieving bandwidth exceeding 2 TB per second. SK hynix plans to begin mass production of these HBM4 devices in the third quarter of 2026.

_The transition to 16-layer HBM4 stacks presents formidable engineering challenges. (Source: SK hynix)_

To fit 16 DRAM dies within JEDEC's strict 775-µm package height limit, SK hynix thinned individual DRAM dies to a staggering 30 µm and stacked them using its proprietary Mass Reflow Molded Underfill (MR-MUF) technology, which heats and interconnects all the vertically stacked chips in a single reflow step, making the process more efficient.
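The stack geometry can be sanity-checked with back-of-the-envelope arithmetic from the figures quoted above (48 GB over 16 layers, 30-µm dies, the 775-µm JEDEC height budget); a minimal sketch, not a datasheet calculation:

```python
# Rough capacity and height arithmetic for a 16-high HBM4 stack,
# using the figures quoted in this article (illustrative only).
layers = 16
stack_capacity_gb = 48            # gigabytes per stack
die_thickness_um = 30             # thinned DRAM die thickness
jedec_height_limit_um = 775       # JEDEC package height budget

per_die_gb = stack_capacity_gb / layers                # 3 GB (24 Gb) per DRAM die
dram_height_um = layers * die_thickness_um             # 480 µm for the DRAM dies alone
remaining_um = jedec_height_limit_um - dram_height_um  # left for base die, bonds, mold

print(f"{per_die_gb:g} GB per die, {dram_height_um} µm of DRAM, {remaining_um} µm to spare")
```

The tightness of that remaining budget is why die thinning and the choice of bonding technology (MR-MUF versus hybrid bonding, discussed below) dominate the 16-high manufacturing debate.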
Another notable highlight of SK hynix's 16-layer HBM4 device is the Korean memory giant's alliance with TSMC to build the base die, the control logic or "brain" of the HBM4 stack, on a 12-nm logic process. This shift from memory nodes to logic nodes effectively turns HBM4 into a custom memory solution that can be tailored for specific AI workloads.

### **Samsung is back**

Unlike SK hynix, which has joined hands with TSMC for HBM4's logic die, Samsung manufactures the logic die on its own foundry's 4-nm process node while handling 3D packaging under the same roof. That makes it the only HBM4 supplier with a turnkey solution controlling the entire stack, from silicon to final packaging.

Moreover, unlike SK hynix, which employs its established MR-MUF technology to manufacture 16-layer HBM4, Samsung is racing ahead with hybrid bonding, a process in which copper pads are fused directly to copper pads without traditional micro-bumps. Hybrid bonding significantly reduces stack height and improves thermal dissipation, and the industry sees it as a long-term answer to next-generation manufacturing challenges.

However, Samsung's most notable leap in the HBM4 realm comes with the adoption of its 1c DRAM process technology, which significantly improves energy efficiency, according to Samsung sources. That is a critical advantage because data-center operators have been struggling with the thermal demands of GPUs exceeding 1,000 watts. According to TrendForce, Samsung has started production of its 1c DRAM, with yields reportedly nearing its 80% mass-production target.

_After falling behind SK hynix in HBM technology, Samsung is attempting to differentiate its HBM4 offerings by enhancing its DRAM and logic die technologies. (Source: Samsung)_

Samsung has delivered samples for Nvidia's Rubin AI accelerators and is reportedly likely to qualify before SK hynix and Micron; it has also already passed Broadcom's system-in-package (SiP) testing for Google's latest-generation TPUs.
So, after falling behind SK hynix in past years, Samsung is bouncing back from its HBM3E setbacks.

### **The HBM4 memory war**

Micron Technology, the third member of the HBM troika, has also met Nvidia's HBM4 specifications for the Rubin AI accelerators and has delivered final customer samples. The Boise, Idaho-based memory chipmaker has been aggressively expanding manufacturing capacity for its 12-layer, 36 GB HBM4 devices featuring a 2,048-bit interface, and it is targeting 15,000 wafers of capacity dedicated to HBM4 manufacturing by the end of 2026.

Meanwhile, industry reports claim that Nvidia revised the HBM4 specifications for its Rubin GPUs in the third quarter of 2025, raising the required per-pin speed to above 11 Gbps. As a result, Micron, Samsung, and SK hynix resubmitted HBM4 samples and continue to refine their designs in response to Nvidia's more stringent requirements.

Nevertheless, HBM4 is poised to become a foundational technology that will determine AI accelerators' ability to train next-generation agentic AI models. No wonder the HBM trio of Micron, Samsung, and SK hynix is diverting massive amounts of wafer capacity away from traditional DDR5 and mobile memory.

As we enter 2026, the memory industry is closely watching early production yields of HBM4 devices from Micron, Samsung, and SK hynix, and 2026 is likely to be another landmark year for these memory chipmakers.
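The bandwidth figures in this article follow from simple interface arithmetic: peak per-stack bandwidth is interface width times per-pin rate. A minimal sketch, assuming the 2,048-bit interface and the 11 Gbps per-pin rate mentioned above, with 8 Gbps taken as an illustrative baseline rate (an assumption, not a figure from this article):

```python
# Peak per-stack bandwidth = interface width (bits) x per-pin rate (Gbit/s) / 8.
# Figures are illustrative; shipping speed bins vary by vendor and qualification.
def hbm_stack_bandwidth_gbytes(interface_bits: int, pin_speed_gbps: float) -> float:
    """Peak theoretical bandwidth of one HBM stack, in GB/s."""
    return interface_bits * pin_speed_gbps / 8

baseline = hbm_stack_bandwidth_gbytes(2048, 8.0)    # 2048 GB/s, roughly 2 TB/s
rubin_rev = hbm_stack_bandwidth_gbytes(2048, 11.0)  # 2816 GB/s, roughly 2.8 TB/s
print(f"8 Gbps/pin:  {baseline:.0f} GB/s")
print(f"11 Gbps/pin: {rubin_rev:.0f} GB/s")
```

Note that raising the per-pin rate from 8 to 11 Gbps lifts peak per-stack bandwidth by nearly 40% without widening the interface, which helps explain why Nvidia's mid-stream spec revision forced all three suppliers to resubmit samples.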
##### See also:

- JEDEC finalizes HBM4 standard
- HBM Innovation Outpaces Standards Development
- SK hynix Maintains Memory Leadership with First HBM4

### Related Stocks

- [Micron Tech (MU.US)](https://longbridge.com/zh-CN/quote/MU.US.md)

## Related News & Research

- [What TurboQuant Actually Means for AI Memory Stocks](https://longbridge.com/zh-CN/news/281528405.md)
- [Micron's stock is seeing its biggest gain in a year, en route to a record market-cap boost](https://longbridge.com/zh-CN/news/281405397.md)
- [The AI Revolution and The 90s Internet Boom](https://longbridge.com/zh-CN/news/281005956.md)
- [A CES 2026 audit reveals four percent growth in attendance](https://longbridge.com/zh-CN/news/281214252.md)
- [TCS Rewires Enterprise Tech With AI](https://longbridge.com/zh-CN/news/280993412.md)