---
title: "Samsung HBM plan: HBM4 dominates this year's shipments, HBM5 substrate upgraded to 2 nanometers"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/279559315.md"
description: "Samsung's HBM ambitions are in full swing: HBM4 will account for more than half of this year's shipments, with overall capacity more than tripling from last year, and the HBM5 base-die process will leap from 4nm to 2nm. Meanwhile, Samsung is manufacturing the Groq 3 inference chip, part of the NVIDIA ecosystem, at its Pyeongtaek campus, transforming itself from a memory supplier into a full-stack partner for AI accelerators."
datetime: "2026-03-18T08:40:00.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/279559315.md)
  - [en](https://longbridge.com/en/news/279559315.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/279559315.md)
---

> Supported Languages: [简体中文](https://longbridge.com/zh-CN/news/279559315.md) | [繁體中文](https://longbridge.com/zh-HK/news/279559315.md)

# Samsung HBM plan: HBM4 dominates this year's shipments, HBM5 substrate upgraded to 2 nanometers

Samsung Electronics is accelerating its push into next-generation high-bandwidth memory. **With HBM4 officially entering mass production this year, Samsung has set its sights on the next generation: it plans to upgrade the HBM5 base-die process from 4nm to 2nm and to use 1d DRAM as the core stacked memory for HBM5E. Meanwhile, HBM4 will account for more than half of Samsung's total HBM shipments this year, with overall HBM capacity more than tripling compared to last year.**

According to ETNews and Yonhap News Agency, Hwang Sang-jun, Vice President and Head of Memory Development at Samsung Electronics, disclosed these plans at the NVIDIA GTC conference. He stated that the base die of HBM5 will use Samsung's 2nm process technology, a generational upgrade from the 4nm process used in HBM4 and HBM4E, to meet the higher memory performance requirements of next-generation AI workloads.
Regarding capacity targets, Hwang Sang-jun said Samsung plans for HBM4 to account for over 50% of total HBM shipments this year, while overall HBM production is expected to more than triple compared to last year. The statement underscores Samsung's determination to expand in the AI memory market, with direct implications for the high-end DRAM supply landscape and the downstream AI accelerator supply chain.

Beyond the memory roadmap, Hwang Sang-jun also revealed that the Groq 3 inference chip is being produced at Samsung's Pyeongtaek campus, with mass production targeted for the end of the third quarter to the beginning of the fourth quarter this year, and that order volumes have already exceeded expectations. Samsung is thus extending its role from a memory supplier to a full-stack partner for AI accelerators.

## HBM5 Substrate Process: Upgrading from 4nm to 2nm

According to ETNews, Hwang Sang-jun stated plainly at NVIDIA GTC that the base die of HBM5 will use Samsung's 2nm process technology, a significant upgrade from the 4nm process used in HBM4 and HBM4E. Improvements in base-die technology typically help raise memory bandwidth and energy efficiency.

Hwang Sang-jun noted that while adopting cutting-edge processes may raise costs, introducing advanced technology is unavoidable if HBM is to reach its performance targets. The statement makes clear Samsung's technical path in high-end AI memory: driving performance leaps through process upgrades.

As for HBM5E, Hwang Sang-jun told ETNews that the product will use 1d DRAM as its core stacked memory, another step up from the 1c DRAM used in HBM4 and HBM4E. The 1d DRAM for HBM5E is still in internal R&D at Samsung and has not yet been commercialized.
However, according to ETNews citing informed sources, Samsung has achieved strong performance and test yields with the technology, a positive signal for advancing toward mass production.

## HBM4 Dominates Shipments This Year, Capacity More Than Tripled Compared to Last Year

According to Yonhap News Agency, Hwang Sang-jun stated that Samsung's goal this year is for HBM4 to account for over 50% of total HBM shipments, while overall HBM production is expected to more than triple compared to last year.

HBM4 officially entered mass production this year. Samsung plans to significantly expand its overall HBM capacity while ramping large-scale production to match rising demand for high-bandwidth memory in the AI chip market. If this capacity expansion plan is realized, it will have a substantial impact on the supply landscape of the high-end DRAM market.

## Groq 3 Foundry: Samsung Expands Its Role in the NVIDIA Ecosystem

Beyond its memory business, Samsung is further expanding its position in the AI accelerator supply chain by manufacturing the Groq 3 inference chip. According to Yonhap News Agency, Hwang Sang-jun said NVIDIA CEO Jensen Huang has publicly acknowledged Samsung's contributions to Groq 3, which is being produced at Samsung's Pyeongtaek campus, with mass production targeted for the end of the third quarter to the beginning of the fourth quarter this year; current order volumes have already exceeded expectations.

According to Yonhap News Agency, the die area of the Groq 3 chip exceeds 700 square millimeters, so only about 64 chips can be cut from a single wafer, far below the usual 400 to 600. Roughly 70% to 80% of the chip's area is SRAM, allowing rapid inference computation on-chip without relying on external HBM. Hwang Sang-jun also revealed that Groq was already a customer of Samsung's foundry services before it signed a licensing agreement with NVIDIA.
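Those wafer figures can be sanity-checked with the common gross-die approximation. This is a back-of-the-envelope sketch, not anything from the article: it assumes a standard 300 mm wafer and ignores scribe lines and defect yield, and `dies_per_wafer` is an illustrative helper name.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Gross dies per wafer: wafer area / die area, minus an edge-loss term
    for the partial dies lost around the wafer's circumference."""
    radius = wafer_diameter_mm / 2
    area_term = math.pi * radius ** 2 / die_area_mm2
    edge_term = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(area_term - edge_term)

# A >700 mm^2 die on a 300 mm wafer gives on the order of 75 gross dies;
# with scribe lines and defects subtracted, the reported ~64 usable chips
# is plausible. A more typical ~120 mm^2 die lands in the 400-600 range.
print(dies_per_wafer(300, 700))
print(dies_per_wafer(300, 121))
```

The gap between the geometric estimate and the reported count is the expected cost of dicing overhead and yield loss, which grows with die size.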
According to SEDaily, Samsung's foundry work on the Groq 3 LPU chip is widely seen as an important milestone in its role as a core partner in the next-generation AI accelerator full-stack platform. Since its foundry division entered NVIDIA's supply chain, Samsung's role has expanded from supplying memory alone to LPU manufacturing, further deepening its collaboration within the NVIDIA ecosystem.

### Related Stocks

- [Samsung Electronics Co Ltd Sponsored GDR Pfd (SSNGY.US)](https://longbridge.com/en/quote/SSNGY.US.md)
- [XL2CSOPSMSN (07747.HK)](https://longbridge.com/en/quote/07747.HK.md)

## Related News & Research

- [Samsung Elec and AMD sign MoU on AI memory, explore foundry partnership](https://longbridge.com/en/news/279553677.md)
- [Samsung Elec showcases Nvidia's new inference chip made using 4 nanometer process](https://longbridge.com/en/news/279332138.md)
- [Micron Technology (MU) Is Up 13.5% After New AI-Focused DRAM and HBM Partnership With Applied Materials](https://longbridge.com/en/news/279414086.md)
- [Alset AI Converts Debt to Equity to Bolster Balance Sheet](https://longbridge.com/en/news/279093528.md)
- [Micron Is the Best-Performing Artificial Intelligence (AI) Stock of the Past Year -- Up 318%. Can It Keep Going in 2026?](https://longbridge.com/en/news/279127711.md)