--- title: "Micron: 12-layer HBM3E begins mass production ahead of schedule" description: "After the announcement, SK Hynix's stock price surged by over 8%. Analysts believe that with NVIDIA's Blackwell Ultra and lightweight model B200A confirmed to introduce HBM3E 12H, it is crucial for SK" type: "news" locale: "en" url: "https://longbridge.com/en/news/215248734.md" published_at: "2024-09-26T02:51:32.000Z" --- # Micron: 12-layer HBM3E begins mass production ahead of schedule > After the announcement, SK Hynix's stock price surged by over 8%. Analysts believe that with NVIDIA's Blackwell Ultra and lightweight model B200A confirmed to introduce HBM3E 12H, it is crucial for SK Hynix, Samsung, and Micron to secure a leading position in this product field. The landscape of the HBM market in the second half of the year will depend on who first provides HBM3E 12H to NVIDIA Boosted by AI demand, SK Hynix takes the lead in mass producing 12-layer HBM3E. On Thursday, September 26th, SK Hynix announced that the company has started mass producing 12H (12-layer stack) HBM3E chips, **achieving the largest 36GB capacity in existing HBM products; the company will provide this product to customers within the year.** Following the announcement, SK Hynix's stock price continued to rise, surging over 8% at the time of writing. ## HBM3E 12H becomes the main battlefield for suppliers in the second half of the year HBM (High Bandwidth Memory) is a key component of GPUs, helping to process large amounts of data generated by complex applications. Chip vertical stacking technology can save space and reduce power consumption. Currently, there are only three major manufacturers of HBM—SK Hynix, Micron Technology, and Samsung Electronics. Among them, Samsung Electronics first introduced HBM3E 12H in February this year. **Considering NVIDIA's decision to incorporate HBM3E 12H into Blackwell Ultra and the lightweight model B200A, this product has become a key battlefield in the AI semiconductor field in the second half of the year.** Some believe that the second half of this year is crucial for determining the future landscape of the HBM market. **Whether SK Hynix can maintain its leading position or Samsung Electronics can reverse its decline will depend on who first provides HBM3E 12H to NVIDIA.** A industry insider commented: > "Even though the price of HBM3E 12H may slightly decrease due to intensified competition, the second half of the year is clearly the time to determine the landscape of the HBM market." > > "The key is how much each company's yield rate can be improved." Media reports revealed that SK Hynix has been a major supplier of NVIDIA's HBM chips and provided HBM3E to an unnamed customer at the end of March. ## Despite strong demand, Morgan Stanley warns of oversupply risks According to reports, SK Hynix's HBM3E 12H stacks 12 layers of 3GB DRAM chips, with the same thickness as the previous 8-layer product but a 50% increase in capacity. To achieve this goal, the company made each DRAM chip 40% thinner than before and used Through Silicon Via (TSV) technology for vertical stacking. The announcement also stated that the memory operation speed of HBM3E 12H has been increased to 9.6Gbps, which is currently the highest memory speed available in the industry. 
According to previous estimates reported by the media, compared with 8-layer stacking, **HBM3E 12H delivers an average 34% increase in AI training speed, and the number of inference service users it can support increases by more than 11.5 times.**

In addition, the company applies its core advanced MR-MUF process to improve the heat dissipation of HBM3E 12H by 10% over the previous generation and to better control warping, ensuring stability and reliability.

**The company stated that the early mass production of 12-layer HBM3E is intended to meet the growing demand from AI enterprises.** Justin Kim, President and Head of AI Infrastructure at SK Hynix, said:

> "SK Hynix has once again broken through technological limitations, demonstrating our industry-leading position in the AI memory field."
>
> "We will continue to maintain our position as the world's largest artificial intelligence memory supplier, steadily preparing next-generation memory products to overcome the challenges of the AI era."

However, Morgan Stanley earlier issued a research report warning of an "impending memory winter", citing a potential oversupply of HBM and a serious supply-demand imbalance in DRAM, among other risks.

Morgan Stanley predicts that by 2025, the "good" supply in the current HBM supply chain (meaning high-quality and sufficient products) may gradually catch up with, or even exceed, currently overestimated demand. On that basis, Morgan Stanley issued a "double downgrade" on SK Hynix, lowering its rating to "underweight" while "halving" its target price from 260,000 Korean won to 120,000 Korean won, which triggered a one-day stock price plunge of 11%.

### Related Stocks

- [NVDA.US - NVIDIA](https://longbridge.com/en/quote/NVDA.US.md)

---

> **Disclaimer**: This article is for reference only and does not constitute any investment advice.