---
title: "Sold-Out Capacity: Who Are the Eight Core Bottlenecks for AI's Future Development?"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/282690531.md"
description: "The race for AI infrastructure has entered a new phase, with core capacities such as TSMC CoWoS and HBM memory chips pre-locked by tech giants like NVIDIA, Apple, Microsoft, and Google until 2027 or 2028. A surge in AI computing demand has exceeded the expansion capacity of the semiconductor supply chain, leading to sold-out production, extended lead times of 6-12 months, and price increases of 10%-30%. Bargaining power within the supply chain is shifting from design to production, highlighting the structural scarcity of AI computing power."
datetime: "2026-04-14T12:22:06.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/282690531.md)
  - [en](https://longbridge.com/en/news/282690531.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/282690531.md)
---

# Sold-Out Capacity: Who Are the Eight Core Bottlenecks for AI's Future Development?

The race for AI infrastructure has entered a new phase where "physical constraints outweigh algorithmic innovation." Victory no longer depends on whose model architecture is superior or whose software stack is more complete, but rather on who can secure the scarcest few "resource locks" in the global supply chain in advance. TSMC CoWoS capacity, HBM memory chips, 3nm/2nm advanced process nodes, ABF substrates, high-end CCLs (Copper Clad Laminate), ultra-thin electronic cloth, power transformers, and gas turbines—the eight core capacity segments—are all pre-locked by tech giants like NVIDIA, Apple, Microsoft, and Google through 2027 and even into 2028.

Why are these capacities being snapped up in advance?

## I. What Happened? A Paradigm Shift in Industry


The root cause is that the explosive growth in AI computing demand has far outstripped the physical expansion capacity of the global semiconductor supply chain. On the demand side, in 2026 all major AI accelerators—NVIDIA Rubin, Google TPU v7/v8, AWS Trainium 3, AMD MI350X—are migrating to the most advanced nodes simultaneously. On the supply side, capacity expansion in advanced packaging, HBM, and ABF substrates is constrained by equipment lead times, material bottlenecks, and yield ramp-up challenges; even at the most aggressive capital expenditure rates, it cannot keep pace with exponential demand growth. The result is extreme short-term tightness in these "locked-in" segments: production is sold out, lead times have stretched to 6-12 months (over 30 weeks for some materials), prices are rising 10%-30% per quarter, and bargaining power in the supply chain has shifted decisively from front-end design to back-end production.

The structural scarcity of AI computing capacity is not accidental; it is driven by three long-term factors. First, the physical limits of the supply chain: 2nm wafer foundry unit prices have reached $30,000, and the interposer area for advanced packaging is growing from 2800mm² to larger sizes while yield remains an uncertain variable. Second, the strategic reconstruction of customer relationships: AI chips have been transformed from "commodities" into "strategic resources," with cloud giants like Microsoft and Google sending executives to South Korea to lock in HBM capacity, even paying 30% deposits upfront to secure 3-5 year long-term agreements. Third, a concentrated burst of technological iteration: the upgrade from M8 to M9/M10 for CCL, the leap from HBM3E to HBM4, and the shift from traditional to vertical power delivery architectures are all occurring within the same time window. Value distribution along the industry chain is being profoundly reshaped in favor of upstream suppliers—ABF substrate prices continue to rise, CCL prices have seen weekly gains of up to 10%-20%, and upstream raw materials account for over 60% of PCB cost structures, so every price increase cascades down the line.

Weighing three dimensions—capital expenditure, technical barriers, and order visibility—we believe 2026-2028 will be the "years of finalization" for the AI supply chain landscape. Leading enterprises with large-scale capacity and first-mover technological advantages stand to earn sustained excess returns through this cycle of capacity scarcity.

This is a significant paradigm shift: the competitive dimension of the AI boom has moved entirely from "algorithm superiority" to "the grab for underlying physical capacity." Under the pressure of NVIDIA Blackwell Ultra and Rubin architecture evolution, global tech giants' procurement strategies have leaped from "order-on-demand" to "strategic pre-payment."

The essence of this capacity grab is a multi-dimensional blockade built around four fronts: signal integrity (PCB/CCL), energy density (supercapacitors/power), logistics channels (shipping slots), and compute futures (NeoCloud).

## II. Why Does It Matter? The Eight Core Capacities Emerge

We dissect these eight capacity bottlenecks layer by layer to outline the full picture of this supply chain crisis.

① TSMC CoWoS: The AI Chip Packaging Dilemma

CoWoS (Chip-on-Wafer-on-Substrate) is TSMC's advanced packaging technology and one of the most severe bottlenecks in current AI chip production. The essence of this technology lies in heterogeneous integration—connecting multiple small chips (computing dies, HBM, I/O dies, etc.) via a silicon interposer for ultra-high-density interconnection within a single package, thereby breaking through single-chip photomask size limits.

The capacity data is compelling. At the end of 2024, TSMC's CoWoS capacity was approximately 35,000 wafers per month; by the end of 2025 it had grown to 80,000, more than doubling; the target for the end of 2026 is 115,000-130,000 wafers per month; and by 2027 it is expected to reach 145,000. Averaged over the full year, CoWoS monthly capacity is estimated at 65,000-70,000 wafers in 2025, rising to 120,000-130,000 in 2026. Despite this accelerated rollout, capacity remains insufficient to meet customer demand. Morgan Stanley expects TSMC's CoWoS monthly capacity to reach at least 120,000 to 130,000 wafers, while TSMC CEO C.C. Wei has stated explicitly: "Our CoWoS capacity is extremely tight and will remain sold out through 2025 and 2026."
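
Taking the end-of-year figures above at face value (and the midpoint of the 2026 target range), the implied growth rates can be checked with a few lines of arithmetic:

```python
# End-of-year CoWoS capacity in wafers per month, from the figures above;
# 2026 uses the midpoint of the 115,000-130,000 target range.
capacity = {2024: 35_000, 2025: 80_000, 2026: 122_500, 2027: 145_000}

years = sorted(capacity)
for prev, cur in zip(years, years[1:]):
    growth = capacity[cur] / capacity[prev] - 1
    print(f"{prev} -> {cur}: {growth:+.0%}")
# 2024 -> 2025: +129%  (more than doubling)
# 2025 -> 2026: +53%
# 2026 -> 2027: +18%
```

Even at these rates, the growth curve flattens each year while accelerator demand compounds, which is why the shortfall persists despite record expansion.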

Even at such an aggressive expansion rate, capacity still lags far behind demand. NVIDIA alone has locked over 60% of TSMC's total CoWoS capacity for 2025-2026 for its Blackwell and upcoming Rubin architectures, with production sold out through the end of 2026 and continued bookings extending into 2027. Due to CoWoS shortages, Google's TPU capacity has also failed to meet expectations. "Packaging has become the narrowest bottleneck for AI computing power"—the interposer area for CoWoS has expanded from an initial ~800mm² to over 2800mm² today, with technical complexity growing exponentially. TSMC is building new packaging facilities (AP7 and AP8) in Chiayi and Tainan, and planning CoWoS capacity in Arizona, but ramping up new capacity takes time, making the tight situation in 2026-2027 unlikely to ease.

② TSMC Wafer Capacity: The Battle for N3 and N2

If CoWoS is a packaging bottleneck, then TSMC's N3 (3nm) and N2 (2nm) processes represent the bottleneck in wafer fabrication. In 2026, AI-related demand already accounts for nearly 60% of N3 capacity, with the remaining 40% primarily used for smartphones and CPUs. All major AI accelerators are migrating to N3 simultaneously in 2026: NVIDIA's Rubin, Google's TPU v7/v8, AWS's Trainium 3, and AMD's MI350X. This concentrated migration creates unprecedented capacity pressure, with major HPC customers having booked N3 and N2 capacity through 2027. Broadcom executives confirmed that TSMC's advanced process capacity is booked through 2028.

More critically, TSMC has pushed effective utilization to the limit through optimizations such as process layer transfers. TSMC previously stated that demand for advanced node wafers is currently "approximately three times the company's available capacity." The situation in 2027 will be even more extreme: AI demand is expected to occupy 86% of N3 wafer output, almost completely squeezing out smartphone and CPU orders, forcing some smartphone product lines to switch to N2 early.

The outlook for N2 is equally tight. TSMC began mass production of N2 in Q4 2025, with initial capacity of 90,000 to 100,000 wafers per month. N2 adopts the GAA (Gate-All-Around) nanosheet transistor architecture, offering 10-15% speed improvement or 25-30% power reduction over 3nm. However, Apple has already locked up over 50% of the early N2 allocation for 2026-2027 for its A20/A20 Pro chips, while AMD, MediaTek, and Qualcomm flagship products will compete for the remaining capacity. At $30,000 per wafer, 2nm foundry pricing is an entry fee not every player can afford.

③ HBM High-Bandwidth Memory: Sold Out Through End of 2026

HBM (High-Bandwidth Memory) is the most performance-sensitive and supply-tight component in AI accelerators. The entire industry chain's capacity tension is most extreme at the HBM stage.

SK Hynix CFO explicitly stated: "We have sold out all our HBM supply for 2026."

Micron CEO similarly confirmed: "Our HBM capacity for 2026 is fully booked."

Samsung is reflecting the supply-demand tension through price hikes—raising HBM prices by 10%-20% in its 2026 contracts.

The three major manufacturers—SK Hynix, Samsung, and Micron—have sold out their 2026 capacity, most of it locked via long-term contracts. SK Hynix has stated plainly: "It will be difficult to make meaningful adjustments to HBM and standard DRAM lines in 2026," with new orders queued until after Q1 2027. Samsung, for its part, raised NAND flash contract prices by 100% in Q1 2026.

The structural reasons behind the HBM shortage are multifaceted. First, HBM requires more process steps than standard DRAM, and HBM3E validation cycles are longer. Second, the technological leap from HBM3E to HBM4: HBM4 uses a 2048-bit interface and requires 12-layer and 16-layer stacking, presenting thermal-density and height-control challenges far beyond previous generations. Third, cloud giants are locking in multi-year allocations of HBM3E and next-gen HBM4—Microsoft and Google sent executives to South Korea to negotiate with SK Hynix, paying 30% deposits upfront to sign three-year long-term agreements with price-floor clauses. SK Hynix is also using its next-generation HBM products as leverage to extend existing partnerships by another two years.

④ ABF Substrates: T-Glass Shortage Ignites Supply Chain

ABF substrates are the core base material for packaging high-compute chips in AI servers. The root cause of current supply-demand tension lies in the severe shortage of upstream key materials—T-Glass fiberglass cloth.

T-Glass offers a low coefficient of thermal expansion and low signal loss, making it a critical material for high-grade ABF and BT substrates. Over 80% of the global T-Glass supply comes from Japan's Nitto Boseki and US-based PPG. When AI demand exploded in 2025, the supply system tightened immediately. Surveys by foreign brokerages indicate a T-Glass supply-demand gap of 25%, with lead times extending from 8-10 weeks to over 30 weeks and Taiwanese substrate makers holding only about two months of inventory. Material prices surged roughly 30% within just a few months.

Ibiden, a major supplier of ABF substrates for NVIDIA AI servers, has decided to accelerate capacity expansion in response to growing AI substrate orders. Its new factory is expected to run at 25% utilization in Q4 2025 and reach 50% before March 2026. Even so, Ibiden has explicitly stated that the new capacity may still fall short of demand.

Foreign institutions take an even longer view on ABF substrate supply and demand. A report from a US brokerage indicates that the supply-demand balance turned at the end of 2025, with supply tightening month by month. The projected supply shortfall reaches 10% in the second half of 2026, expanding to 21% in 2027 and 42% in 2028—comparable to the shortage period of 2020, when prices rose approximately 20-30% annually—and further price hikes are likely in the coming quarters.

⑤ High-End CCL Copper Clad Laminates: The Upgrade Wave from M7→M8→M9→M10

Copper Clad Laminate (CCL) is the core material of PCB manufacturing and the segment with the most pronounced price transmission in this wave of hikes. The M-series numbering maps directly to material-technology generations: higher grades mean better material performance and lower loss. The AI boom has rapidly driven adoption across the M2-M8 series of high-speed CCLs; NVIDIA's Blackwell platform has upgraded CCL to M8, and the Rubin platform is expected to adopt M9/M10.

M9 is a high-frequency, high-speed CCL developed for NVIDIA's next-generation Rubin-architecture AI servers. Its core composition includes special resins, quartz cloth (Q-cloth), and high-end copper foil (HVLP4/HVLP5). With NVIDIA's Rubin platform expected to launch in 2026, demand for M9-grade CCL may surge. M10 combines hydrocarbon resin with electronic-grade quartz cloth; NVIDIA has begun M10 testing, targeting the mass-produced Rubin Ultra and Feynman platforms in 2027.

The price-hike signals came first from the international giants. After Japan's Resonac announced increases of over 30% for copper-clad substrates and adhesive films, Mitsubishi Gas Chemical announced hikes of 30%, effective April 1st, across its full range of high-end PCB materials, including CCL and copper-foil resin sheets. Kingboard recently issued a price-hike notice stating: "Recent surges in chemical product prices and tight supply have caused CCL costs to rise sharply, so we are raising prices for all board materials and PP (prepreg) by 10%."

Starting in December 2025, mainstream manufacturers such as Kingboard and Nan Ya New Materials issued frequent price-adjustment letters, with CCL prices rising by as much as 10%-20% in a single week. The hikes rest on real supply-demand fundamentals: PCB usage per AI server is 3-5 times that of a traditional server, with value 8-12 times higher. Upstream raw materials account for about 60% of the PCB cost structure, so fluctuations in CCL, copper foil, and prepreg prices directly determine profitability along the chain.

In the CCL cost structure, copper foil accounts for about 42%, resin 26%, and fiberglass cloth 20%. Currently, there is a severe supply gap for high-end fiberglass cloth (such as Gen 1, Gen 2, and Q-cloth), with consumer electronics manufacturers like Apple and Qualcomm competing with AI server suppliers for limited supplies.
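
These cost shares make the pass-through arithmetic easy to sketch. As an illustration only—the 30% fiberglass hike below is a hypothetical figure in line with the T-Glass moves described earlier, not a reported contract price:

```python
# CCL cost shares from the text: copper foil ~42%, resin ~26%, fiberglass cloth ~20%.
ccl_shares = {"copper_foil": 0.42, "resin": 0.26, "fiberglass_cloth": 0.20, "other": 0.12}

# Hypothetical input-price moves (illustrative, not reported figures).
input_hikes = {"fiberglass_cloth": 0.30}  # e.g. a 30% fiberglass-cloth hike

# First-order pass-through into CCL cost, holding other inputs flat.
ccl_cost_increase = sum(share * input_hikes.get(name, 0.0)
                        for name, share in ccl_shares.items())
print(f"CCL cost +{ccl_cost_increase:.0%}")  # a 30% fiberglass hike alone lifts CCL cost ~6%
```

By the same first-order logic, with upstream materials at roughly 60% of PCB cost, a uniform 10% materials hike would lift PCB cost by around 6%—which is why each round of upstream increases cascades so visibly down the chain.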

⑥ Ultra-Thin Electronic Cloth: Structural Scarcity Squeezed by AI Capacity

Electronic-grade fiberglass cloth is the core reinforcing material for CCL, and this shortage is clearly structural: AI computing demand is forcing fiberglass makers to tilt capacity toward high-end categories such as low-Dk (LDK) fiberglass cloth and quartz cloth, directly creating structural shortages of 1080, 2116, and standard 7628 cloths.

Starting in the second half of 2025, the industry entered a clear price-hike cycle. Ordinary electronic cloth saw four rounds of increases—in October and December 2025 and January and February 2026—with 7628 thick cloth up a cumulative 1 to 1.2 yuan/meter and thin cloth up even more. Nitto Boseki raised fiberglass product prices across the board by 20% starting August 2025, and the industry widely expects others to follow. More notable still, supply of ultra-thin cloth (the 1080 grade) is constrained by a bottleneck in imported looms—domestic equipment cannot yet meet the process-precision requirements—so the shortage is likely to persist through the year.

⑦ Transformers and Vertical Power Delivery Modules: From Power Demand to Physical Limits

As GPU power consumption climbs from 700W to over 1400W, AI server power systems are undergoing a fundamental shift from "horizontal power delivery" to "vertical power delivery" (VPD). With GPU currents exceeding 850-1000A, traditional horizontal delivery incurs power delivery network (PDN) losses exceeding 100W. The vertical architecture delivers power straight down through the PCB layers to the processor, cutting total network resistance from 90-140μΩ in horizontal mode to 10-15μΩ.
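
The resistive loss behind those figures follows directly from P = I²R. A quick sketch using the currents and resistances quoted above (representative operating points, not measured data):

```python
def pdn_loss_w(current_a: float, resistance_uohm: float) -> float:
    """Resistive loss P = I^2 * R, with R given in micro-ohms."""
    return current_a ** 2 * resistance_uohm * 1e-6

# At 1000 A of GPU current:
horizontal = pdn_loss_w(1000, 100)   # ~100 uOhm horizontal PDN -> 100.0 W lost
vertical = pdn_loss_w(1000, 12.5)    # ~12.5 uOhm vertical PDN  ->  12.5 W lost
print(horizontal, vertical)
```

Because loss scales with the square of current, an order-of-magnitude cut in PDN resistance is what keeps delivery losses manageable as per-GPU currents keep climbing—this is the whole case for vertical power delivery.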

Vertical power delivery module products from suppliers like Infineon continue to evolve on key performance metrics such as current density, with third-generation products reaching 24A/mm². More critically, the widespread adoption of vertical power delivery requires a complete redesign of server architecture and imposes higher demands on the capacity of supporting components like transformers, inductors, and capacitors.

⑧ Gas Turbines: The "Energy Infrastructure" for Compute Facilities

The scaled expansion of AI data centers is becoming a significant incremental driver of natural gas power generation demand. As core equipment for data center backup power and distributed energy, gas turbines also face capacity tightness. This is directly tied to AI servers' power and cooling needs—a single NVIDIA GB200 NVL72 rack draws around 120kW, and data center power demand is evolving from the MW scale toward the GW scale.

## III. What to Watch Next? Industry Landscape and Technological Evolution

This round of AI supply chain competition shows a trend of polarization.

The "winner-takes-all" effect is most evident in the CoWoS segment—NVIDIA alone has locked over 60% of TSMC's advanced packaging capacity for 2026, leaving competitors AMD and Broadcom fighting fiercely for the remaining capacity. In the HBM sector, SK Hynix leads in market share thanks to its first-mover advantage in entering NVIDIA's supply chain. This "Matthew Effect" means that enterprises establishing long-term supply relationships ahead of this cycle will enjoy sustained competitive advantages for years to come.

The ultimate manifestation of the capacity shortage is delayed AI server deliveries and rising compute costs. Datacenter-grade GPUs face lead times of 6-12 months; high-end GPUs like the B200 are virtually "invisible" in the Chinese market; and consumer GPUs like the RTX 5090 and 4090 have seen prices surge on the back of raw-material price hikes and other factors.

More importantly, downstream enterprises are already preparing for prolonged supply tightness. Microsoft and SK Hynix are nearing completion of DDR5 long-term negotiations, with contract values totaling tens of trillions of Korean won. Samsung Electronics is promoting advanced packaging bundled with DRAM and foundry services to select customers, pushing "capacity locking" to a new level of integrated foundry-plus-memory.

We divide the industrial transformation driven by this round of AI capacity bottlenecks into four stages:

Stage 1 (2024-2025): Initial Mismatch Period. AI computing demand begins to explode, but upstream capacity expansion lags, producing the first supply-demand mismatches. CoWoS and HBM tighten first. TSMC launches expansion plans, but the new capacity is too far off to relieve the immediate squeeze.

Stage 2 (2025-2026): Accelerated Capacity Shortage Period. All major AI accelerators migrate to the most advanced nodes simultaneously, with all eight capacity segments under strain. Downstream customers begin locking capacity through long-term contracts and prepayments, fundamentally changing contract models. We are currently in the core acceleration phase of this stage.

Stage 3 (2026-2028): Capacity Release and Landscape Solidification Period. New capacities like TSMC's AP7/AP8 and SK Hynix's P&T7 gradually come online, but AI demand grows synchronously. Leading enterprises with first-mover advantages and scale further consolidate market positions, with bargaining power continuing to concentrate upstream.

Stage 4 (Post-2028): Supply Chain Maturity and Landscape Reconstruction Period. Global advanced packaging and HBM capacity form a new pattern of "Asian Manufacturing + Global Layout." Geographic diversification of the supply chain is initially complete, but high-end segments remain dominated by a few oligopolies.

Core Conclusions:

Conclusion 1: The essence of AI computing capacity bottlenecks is not cyclical supply-demand fluctuations but a structural mismatch between technological generational leaps and capacity construction rhythms. 2026-2028 will be the "year of finalization" for the AI supply chain landscape; enterprises that lock in capacity first will gain sustained competitive advantages for years to come.

Conclusion 2: Among the eight capacity bottlenecks, CoWoS and HBM are the "bottlenecks within bottlenecks." Although CoWoS capacity expansion is fast, it still cannot keep up with demand, with NVIDIA locking over 60% of capacity; HBM 2026 capacity is fully sold out, with Microsoft and Google paying 30% deposits upfront to secure 3-5 year long-term agreements. The supply-demand tension in these two segments far exceeds market expectations and will persist beyond 2027.

Conclusion 3: ABF substrates and high-end CCLs are the upstream material segments with the greatest price elasticity in the capacity squeeze. The T-Glass supply-demand gap reaches 25%, lead times stretch past 30 weeks, and new capacity will not come online until the end of 2026. US brokerages estimate the ABF substrate supply shortfall will reach 10% in the second half of 2026, expanding to 42% by 2028. CCL prices have risen by as much as 10%-20% in a single week, and the upgrade path to high-end M9/M10 materials is all but certain.

Conclusion 4: Domestic substitution is approaching a historic window in high-end CCLs, ultra-thin electronic cloth, and packaging substrates. In high-end CCL, domestic firms such as Kingboard, Shengyi Technology, and Nan Ya New Materials are accelerating their catch-up. In ABF substrates, companies such as Fastprint and Shennan Circuits have achieved partial breakthroughs. Amid the current shortage wave, enterprises with high-end product capacity and customer-certification advantages are positioned to accelerate their integration into the supply chain.

Risk Warnings and Disclaimer

Markets involve risks; investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment objectives, financial situations, or needs of individual users. Users should consider whether any opinions, views, or conclusions herein align with their specific circumstances. Investments made based on this content are at the user's own risk.

### Related Stocks

- [AAPL.US](https://longbridge.com/en/quote/AAPL.US.md)
- [TSM.US](https://longbridge.com/en/quote/TSM.US.md)
- [MU.US](https://longbridge.com/en/quote/MU.US.md)
- [GOOG.US](https://longbridge.com/en/quote/GOOG.US.md)
- [NVDA.US](https://longbridge.com/en/quote/NVDA.US.md)
- [GOOGL.US](https://longbridge.com/en/quote/GOOGL.US.md)
- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [MSFO.US](https://longbridge.com/en/quote/MSFO.US.md)
- [ARTY.US](https://longbridge.com/en/quote/ARTY.US.md)
- [TSMG.US](https://longbridge.com/en/quote/TSMG.US.md)
- [IDGT.US](https://longbridge.com/en/quote/IDGT.US.md)
- [AGIX.US](https://longbridge.com/en/quote/AGIX.US.md)
- [XSD.US](https://longbridge.com/en/quote/XSD.US.md)
- [SOXL.US](https://longbridge.com/en/quote/SOXL.US.md)
- [NVDX.US](https://longbridge.com/en/quote/NVDX.US.md)
- [AAPU.US](https://longbridge.com/en/quote/AAPU.US.md)
- [TSMU.US](https://longbridge.com/en/quote/TSMU.US.md)
- [FTXL.US](https://longbridge.com/en/quote/FTXL.US.md)
- [NVDY.US](https://longbridge.com/en/quote/NVDY.US.md)
- [MSFL.US](https://longbridge.com/en/quote/MSFL.US.md)
- [GGLL.US](https://longbridge.com/en/quote/GGLL.US.md)
- [AAPX.US](https://longbridge.com/en/quote/AAPX.US.md)
- [MSFX.US](https://longbridge.com/en/quote/MSFX.US.md)
- [PSI.US](https://longbridge.com/en/quote/PSI.US.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [MSFU.US](https://longbridge.com/en/quote/MSFU.US.md)
- [DTCR.US](https://longbridge.com/en/quote/DTCR.US.md)
- [AIQ.US](https://longbridge.com/en/quote/AIQ.US.md)
- [TSMX.US](https://longbridge.com/en/quote/TSMX.US.md)
- [AAPB.US](https://longbridge.com/en/quote/AAPB.US.md)
- [NVDL.US](https://longbridge.com/en/quote/NVDL.US.md)
- [SOXQ.US](https://longbridge.com/en/quote/SOXQ.US.md)
- [NVDU.US](https://longbridge.com/en/quote/NVDU.US.md)

## Related News & Research

- [AI Powerhouse Firmus Rockets to $5.5 Billion Valuation with Nvidia (NVDA) Backing](https://longbridge.com/en/news/282047226.md)
- [Google brings its Gemini Personal Intelligence feature to India](https://longbridge.com/en/news/282713461.md)
- [Willow Lane Says Merger Target Boost Run Achieves Nvidia AI Cloud Certification](https://longbridge.com/en/news/282552982.md)
- [PREVIEW-TSMC likely to book fourth straight quarter of record profit on insatiable AI demand](https://longbridge.com/en/news/282478726.md)
- [Nvidia Chips Allegedly Brought Into China by AI Firm Sharetronic](https://longbridge.com/en/news/282379892.md)