NVIDIA's Blackwell is progressing smoothly, and TSMC continues to benefit.

Recent reports indicate that in August, Nvidia modified the photomask for its new Blackwell product and started a new super hot (expedited) run, with results rumored to be available by mid-October. The process is currently said to be progressing smoothly.

However, industry experts believe a large number of production runs will still be needed for batch testing, to gather statistical data and determine whether the yield is acceptable; that is likely to come in the next phase. For now, after the photomask improvements, the revised circuit and routing design appears effective on initial inspection.

The earlier issues with Blackwell may not have been limited to the photomask; the bridge die design spanning the two chips also faced challenges. Nvidia has therefore been working with the TSMC team to resolve and improve these issues, so that the connection across the two chips runs more smoothly and bandwidth problems are reduced.

Regarding warpage, consensus has been reached: thermo-compression (TC) bonders, likely from K&S, have been selected (approximately 18 units) for the CoWoS-L fluxless process. The TC fluxless process is reported to have already been qualified, so ASM is unlikely to be involved.

Additionally, TSMC's advanced packaging CoWoS wafer output is expected to double by Q3~Q4 next year, with CoWoS-L alone potentially reaching 100,000 wafers per quarter.

Current CoWoS wafer output is around 300K per year, which may double to 600K~700K next year (the 600K figure does not yet include TSMC's acquisition of Innolux's Plant 4 at the Southern Taiwan Science Park (Nanke) for glass-substrate advanced packaging).
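A rough back-of-envelope check of these figures, treating them as rumored estimates rather than confirmed TSMC data, is sketched below in Python:

```python
# Back-of-envelope check of the rumored CoWoS figures (the article's estimates, not confirmed data).
current_per_year = 300_000                      # ~300K wafers/year today
next_year_range = (600_000, 700_000)            # rumored range for next year

cowos_l_annualized = 100_000 * 4                # CoWoS-L at ~100K/quarter -> 400K/year
print(next_year_range[0] / current_per_year)    # 2.0 -> consistent with "double"
print(cowos_l_annualized / next_year_range[0])  # ~0.67 -> CoWoS-L alone would be roughly 2/3 of the low-end total
```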

This projection reflects current expectations, but TSMC's concern is that order visibility extends only through the first half of 2025 (roughly to July); the second half, and how far capacity should be expanded for it, remains uncertain.

Much depends on Nvidia and AMD. AMD remains relatively optimistic and is also modifying its architecture.

One thing is certain: after AI, CPO (co-packaged optics) will be a major battleground. Both AMD and Nvidia are designing their own CPO solutions, because the current that copper wiring can support is nearing its limit, forcing a swift transition to CPO. Each company has its own CPO design and is collaborating with TSMC on it, for example through TSMC's proposed SoW (System on Wafer) architecture using silicon photonics. In addition, dies will adopt MCM (multi-chip module) packaging and will inevitably move toward chiplets.

AMD is currently optimistic, primarily because it tracks customer inventory closely, visiting top CSP clients to ask about inventory levels. The responses, however, have been limited, which raises concerns about the impact on ROI, future wafer orders, EPS, and growth. With the U.S. economy apparently slowing, the hope is for a soft landing; a hard landing could abruptly halt these plans and cut investment. Both AMD and Nvidia are therefore watching their CSP clients closely and proceeding cautiously.

In fact, some believe both Nvidia and AMD are worried about order cancellations.

Ultimately, whether AI is a genuine proposition will depend on whether CSP companies' EPS can be justified and whether their AI investments yield appropriate returns. For example, Microsoft has been hesitant to disclose financial returns from Copilot.

If these businesses become highly profitable, it should show up in financial reports, and demonstrable AI profitability would spur the next wave of investment. If, however, companies cannot clearly quantify, or are reluctant to disclose, AI's contribution to revenue and EPS, investor and shareholder sentiment may stay murky, and EPS pressure could eventually lead to reduced spending.

Despite the uncertain future, Nvidia and AMD believe the first half of next year will be stable, and TSMC shares this view. However, the second half remains unclear. Another critical factor is the macroeconomic environment; if the economy weakens, companies may reconsider maintaining high capital expenditures.

Concerns about whether AI can generate significant revenue are not new to this year. In fact, doubts emerged shortly after AI's rise in 2023. However, TSMC remained resolute in expanding capacity, primarily due to clear customer orders.

Additionally, AI is a nascent field that started essentially from scratch in 2022. Output is gradually being scaled up toward 600K per year, with further expansion expected. In 2025 the equipment for the full 600K may not yet all be in place, but by 2026 output could reach 800K+, based on annualizing the 25Q4 run rate (210K × 4). TSMC believes 800K may be the peak; without explicit customer demand, further expansion is unrealistic at this stage.
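For reference, the "800K+" figure is simply the rumored 25Q4 quarterly run rate annualized; a one-line check:

```python
# Annualize the rumored 25Q4 run rate (the article's figure, not confirmed data).
q4_2025_quarterly = 210_000
print(q4_2025_quarterly * 4)   # 840,000 wafers/year, i.e. the "800K+" estimate
```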

In memory chips, Micron has replaced equipment twice, improving yields.

They switched to Hanmi, acquiring about 10 units, and due to insufficient capacity, continue to purchase Shibaura equipment. The front-end wafer process has also changed suppliers, with SUSS likely out and TEL taking its place.

In the future, memory will continue to play a key role in AI, but not necessarily HBM. Research is turning to new directions, such as mobile DRAM.

HBM is currently out of favor. Packaging architectures for mobile LPDDR and the accompanying front-end breakthroughs will no longer center on HBM. Future advances, whether in EUV front-end nanoscale processes or in overhauled back-end DRAM architectures, will center on LPDDR as the cache core of next-generation memory.

This is because AI phones and PCs, which lack HBM, run into the memory wall even when fitted with AI chips.
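A minimal sketch of the memory-wall argument: for memory-bound token generation, throughput is roughly bounded by memory bandwidth divided by the bytes streamed per token. The bandwidth and model-size numbers below are hypothetical, for illustration only, not figures from the article:

```python
# Rough memory-wall arithmetic for memory-bound LLM token generation:
# each new token must stream (roughly) all model weights from memory once,
# so token rate ~= memory bandwidth / model size. Numbers are hypothetical.
def rough_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

print(rough_tokens_per_second(bandwidth_gb_s=68.0,  model_size_gb=4.0))   # ~17 tok/s at LPDDR-class bandwidth
print(rough_tokens_per_second(bandwidth_gb_s=800.0, model_size_gb=4.0))   # ~200 tok/s at HBM-class bandwidth
```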

HBM is confirmed to be the most power-hungry component in AI architecture, prompting discussions about replacing it with LPDDR.

Due to power consumption concerns, memory manufacturers are developing next-gen LPDDR, which is more energy-efficient but may lack bandwidth, requiring front-end nanotech breakthroughs. While LPDDR may not fully replace HBM, partial replacement could lead to HBM oversupply at some point—not necessarily next year, but when next-gen LPDDR emerges, marking an industry inflection point.

The new LPDDR may even use hybrid bonding, a cutting-edge topic under discussion. The industry currently disfavors HBM due to high power consumption and cooling challenges. Cooling methods include air, liquid, and immersion cooling. If immersion cooling cannot resolve heat dissipation, how will larger architectures cope? Thus, CPO and LPDDR are critical to AI's future.

The B-series tape-out is progressing, with RTO dies already under testing. Final results may arrive by mid-October, meaning Blackwell mass shipments could begin in December, with early batches possible in November.

Currently, tape-out results appear promising, but yield optimization is still needed.

TSMC's front-end yield on such large dies also requires optimization. Nvidia's CP (chip probe) yield is notably lower than Apple's, at around 60~70%, compared with 80~90% for other mobile clients.

CoWoS-L yield is around 95%, but UPH (units per hour) is low; this affects throughput, and therefore pricing and margins, rather than quality.
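Why low UPH hits price and margin rather than quality: fewer units per hour means more tool time per unit, so cost per unit rises while yield stays the same. The UPH values and hourly tool cost in the sketch below are hypothetical:

```python
# Hypothetical illustration: lower UPH means more tool-hours per unit, so processing
# cost per unit rises while yield (quality) is unchanged. Numbers are made up.
def cost_per_unit(uph: float, tool_cost_per_hour: float) -> float:
    return tool_cost_per_hour / uph

print(cost_per_unit(uph=10, tool_cost_per_hour=500))  # 50.0 per unit
print(cost_per_unit(uph=4,  tool_cost_per_hour=500))  # 125.0 per unit at the lower UPH
```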

Currently, both Nvidia and AMD are working with TSMC on CPO. Many CPOs are designed by end manufacturers, making design the decisive factor, with TSMC handling production. AMD is gaining traction but may struggle to surpass Nvidia due to software and ecosystem gaps, as CUDA remains dominant.

Expert: AMD may partner with companies like Broadcom, which specialize in CPO and optical engines, but the real competition lies in their own designs. Most designs are developed in-house by AMD and Nvidia, not outsourced to ASIC vendors, who remain suppliers. These details are unlikely to be shared, as they involve system architecture coordination, similar to the aforementioned SoW (System on Wafer) concept.

Overall, Blackwell, representing the most advanced AI computing power, is steadily advancing under Nvidia and TSMC's efforts, heralding a new year of rapid AI development.
