---
title: "H200 Countdown for Increased Volume in the Chinese Market! CUDA Supports Strong Demand as NVIDIA's AI Empire Welcomes \"Incremental Benefits\""
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/279506184.md"
description: "NVIDIA CEO Jensen Huang announced at the GTC conference that the company will mass-produce H200 AI chips for the Chinese market, marking progress in its efforts to re-enter the Chinese AI computing market. Despite facing an additional 25% tariff, the supply chain for the H200 chips is being restarted, which has a positive impact on NVIDIA's fundamental expansion prospects. Jensen Huang also revealed that NVIDIA has obtained permission from the U.S. government to sell H200 chips to large customers in China and announced a new AI computing infrastructure plan"
datetime: "2026-03-18T01:34:10.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/279506184.md)
  - [en](https://longbridge.com/en/news/279506184.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/279506184.md)
---

# H200 Countdown for Increased Volume in the Chinese Market! CUDA Supports Strong Demand as NVIDIA's AI Empire Welcomes "Incremental Benefits"

According to Zhitong Finance APP, Jensen Huang, CEO of "AI chip superpower" NVIDIA (NVDA.US), stated that the chip giant is ramping up mass production of the H200 AI training/inference accelerator, based on the Hopper architecture introduced in March 2022, for its customers in the Chinese market. This indicates that the American chip company is making tangible progress in its effort to return to China's crucial AI computing infrastructure market.

Undoubtedly, if the H200 can flow into the Chinese market at scale despite the additional 25% U.S.
government tariff, it would be a substantial incremental benefit for NVIDIA's fundamental expansion prospects, especially considering that neither NVIDIA's official quarterly guidance nor the "super AI blueprint" of at least one trillion dollars by 2027, presented at the GTC conference on Monday, factored in revenue from the Chinese market.

After delivering his keynote at the NVIDIA GTC conference and unveiling the next-generation AI computing infrastructure, the Vera Rubin architecture AI computing system, to global investors, Huang stated at a press conference on Tuesday local time that NVIDIA has received permission from the U.S. government to sell H200 AI chips to "many large customers in the Chinese market" and is currently "restarting our mass manufacturing." He emphasized that this outlook is markedly different from just a few weeks ago.

"Our H200 supply chain is being restarted," Huang said during an event at NVIDIA's annual GTC conference in San Jose, California. The day before, during the GTC opening keynote, he had unveiled a series of new products and updated investors on the company's financial fundamentals.

In recent years, NVIDIA has been working to restore its AI chip sales in the Chinese market. Because of long-standing U.S. government restrictions on chip exports to China, this once vast market that NVIDIA relied on has been almost entirely closed off to such AI computing infrastructure products.

**H200 Under Pressure from U.S. Government's 25% Tariff**

Since the beginning of this year, however, the Trump administration has begun allowing NVIDIA and its strongest competitor AMD (AMD.US) to sell weaker versions of their AI chips to the Chinese market; this still requires formal permission from the U.S. government and carries a 25% tariff. The U.S.
government allows NVIDIA to export the H200 to China under specific conditions, with the 25% fee/tariff as a trade-off. The arrangement is essentially a policy compromise: exports are permitted while the government also collects revenue. In contrast, higher-end products such as the Blackwell-architecture series and AMD's Instinct MI450 series are still treated as more sensitive technologies at the U.S. policy level and are not currently within the export-license scope. They may not be exported at all, and thus fall outside such tariff policies.

It is important to note that the semiconductor tariff policy targeting NVIDIA and AMD excludes chips used for domestic U.S. data centers, consumer devices, and industrial purposes, meaning the tariffs will not apply to H200/MI325X chips deployed directly in the U.S.

Currently, NVIDIA has not included any revenue outlook from Chinese data centers in its financial forecasts. The data center business unit, NVIDIA's core segment, provides the H100/H200 and Blackwell/Blackwell Ultra architecture AI GPUs that supply data centers worldwide with enormously capable AI computing infrastructure. In an earnings call last month, the company said it had received only a preliminary license from the U.S. government to ship a small number of H200 AI chips to the Chinese market.

Although the H200's overall performance is far below that of NVIDIA's current Blackwell/Blackwell Ultra architecture AI chips used for training and running large AI models, it remains popular in the sanctioned Chinese market thanks to its strong AI inference capabilities, the CUDA ecosystem that has swept the global AI developer community, and its ease of deployment. China previously accounted for a quarter of NVIDIA's total revenue but now represents only a small portion.
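As a back-of-envelope illustration of how the 25% levy raises the cost of a China-bound chip, the sketch below applies it to a per-chip price. Note that `LIST_PRICE_USD` is a purely hypothetical figure: the article does not disclose actual H200 pricing.

```python
# Back-of-envelope sketch of the 25% U.S. tariff on China-bound H200 shipments.
# LIST_PRICE_USD is a hypothetical figure for illustration only; the article
# does not disclose actual H200 pricing.

TARIFF_RATE = 0.25        # 25% levy on China-bound shipments (per the article)
LIST_PRICE_USD = 30_000   # hypothetical per-chip price, NOT from the article

def landed_cost(price: float, tariff: float = TARIFF_RATE) -> float:
    """Per-chip cost after the tariff is applied."""
    return price * (1 + tariff)

print(f"Hypothetical landed cost per chip: ${landed_cost(LIST_PRICE_USD):,.0f}")  # $37,500
```

Whatever the real price, the structure is the same: the levy adds a quarter on top of every China-bound unit while U.S.-deployed chips are exempt.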
Despite the strong global demand for NVIDIA's AI chips, China remains undoubtedly the largest single semiconductor market in the world, making its long-term prosperity crucial for NVIDIA.

NVIDIA received verbal permission from U.S. President Donald Trump last December to sell H200 chips to some Chinese customers, but the company has yet to confirm any H200 revenue from the Chinese market stemming from that permission. Washington's regulators and tariff rule-makers have also set up several additional hurdles that slow the formal approval process, making a full restoration of unsanctioned sales unlikely. With Huang's latest statement that H200 production is "restarting our mass production," NVIDIA may soon confirm H200 revenue from the Chinese market.

Reports have previously indicated that H200 AI chips shipped to the Chinese market must undergo additional routine U.S. inspections and are subject to a hefty 25% tariff. U.S. government officials are also considering limiting purchases to 75,000 H200 chips per Chinese customer, with total shipments capped at 1 million processors. Demand for H200 AI chips in the Chinese market is likely very strong; the binding constraint on transactions is not demand but U.S. government policy and approvals. Recent reports indicate that Chinese tech companies have placed actual orders for over 2 million H200 AI chips, while NVIDIA's inventory at the time was only about 700,000 units.

**Chinese Market - A Significant Incremental Benefit for NVIDIA**

On Tuesday, NVIDIA's stock closed down 0.7% at $181.93 in U.S. trading, bringing its year-to-date decline to 2.5% and underperforming the S&P 500 index.
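The purchase-cap and order figures cited earlier in this section imply a wide gap between what Chinese customers have ordered and what could actually ship near term. A minimal sketch of that arithmetic, using only the numbers reported in the article:

```python
import math

# Demand-vs-supply arithmetic using only the figures cited in the article.
PER_CUSTOMER_CAP = 75_000        # proposed per-customer purchase limit
TOTAL_SHIPMENT_CAP = 1_000_000   # proposed total shipment ceiling
REPORTED_ORDERS = 2_000_000      # H200 chips reportedly ordered by Chinese firms
REPORTED_INVENTORY = 700_000     # NVIDIA's reported H200 inventory at the time

# Near-term deliverable volume is bounded by the tightest of the three limits.
deliverable = min(TOTAL_SHIPMENT_CAP, REPORTED_INVENTORY, REPORTED_ORDERS)
unmet_demand = REPORTED_ORDERS - deliverable

# How many customers, each buying the full per-customer cap, it would take
# to exhaust the proposed total shipment ceiling.
customers_to_fill_cap = math.ceil(TOTAL_SHIPMENT_CAP / PER_CUSTOMER_CAP)

print(f"Deliverable near term: {deliverable:,} chips")       # 700,000
print(f"Unmet demand: {unmet_demand:,} chips")               # 1,300,000
print(f"Customers to exhaust cap: {customers_to_fill_cap}")  # 14
```

Even at the full 1-million-chip policy ceiling, reported orders would still exceed supply roughly two to one, consistent with the article's point that policy, not demand, is the binding constraint.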
From a fundamental-expectations perspective, if the H200 can indeed flow into the Chinese market at meaningful scale, it would be a substantial incremental benefit for NVIDIA: China once accounted for about a quarter of NVIDIA's revenue but now represents only a small portion. Moreover, the strong earnings guidance NVIDIA provided in February for this quarter included no revenue outlook from Chinese data centers, and the company's recent outlook for such revenue remains zero. As soon as H200 shipments begin to normalize, even partially, they will create incremental upside for NVIDIA's current valuation models and market growth expectations.

In terms of raw performance, the H200 already trails the current Blackwell lineup by one or even two generations, especially with Huang having just announced that Vera Rubin will enter mass production by the end of the year. The H200 belongs to the classic Hopper architecture, with 141GB of HBM3e, 4.8TB/s of memory bandwidth, and approximately 4 PFLOPS of FP8 compute; NVIDIA has publicly demonstrated that the GB200 NVL72 can achieve a 15-fold performance and revenue-opportunity advantage over the Hopper H200 in certain inference scenarios. Furthermore, NVIDIA's official figures for Vera Rubin claim a 10-fold performance-per-watt improvement over Blackwell and a 10-fold lower token cost.

None of this, however, keeps the H200 from fitting the current, sanctions-constrained demand in China. The H200 offers nearly a 6-fold performance improvement over the H20, NVIDIA's previously launched AI chip for the Chinese market. In the global wave of AI inference, what enterprises truly need is a supply of mature chips that can be deployed immediately and run large-model inference, with larger memory and higher bandwidth.
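Taking the spec figures and ratios quoted above at face value, a rough sketch of what they imply. The derived H20 number is a back-of-envelope inference from the article's "nearly 6-fold" claim, not a published specification:

```python
# Back-of-envelope inferences from the H200 figures and ratios quoted in the
# article. Derived values are illustrative, not published specifications.

H200_FP8_PFLOPS = 4.0   # ~4 PFLOPS FP8 (per the article)
H200_HBM3E_GB = 141     # 141 GB HBM3e capacity
H200_BW_TBPS = 4.8      # 4.8 TB/s memory bandwidth

H200_OVER_H20 = 6       # "nearly 6-fold" improvement over the H20 (per the article)

# Implied H20 FP8 throughput if the 6x ratio is taken at face value.
implied_h20_pflops = H200_FP8_PFLOPS / H200_OVER_H20   # ~0.67 PFLOPS

# Compute-to-bandwidth ratio: peak FLOPs available per byte fetched from HBM.
# Memory-bound inference rarely reaches this, which is why capacity and
# bandwidth (not peak FLOPS) dominate the H200's appeal for inference.
flops_per_byte = (H200_FP8_PFLOPS * 1e15) / (H200_BW_TBPS * 1e12)  # ~833

print(f"HBM3e capacity: {H200_HBM3E_GB} GB")
print(f"Implied H20 FP8: {implied_h20_pflops:.2f} PFLOPS")
print(f"H200 peak FLOPs per HBM byte: {flops_per_byte:.0f}")
```

The last figure is the point the next paragraphs make in prose: for bandwidth-bound inference, the 141GB/4.8TB/s memory system matters more than peak compute.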
NVIDIA's AI GPUs, which almost monopolize AI training, require ever more powerful AI computing clusters and rapid iteration across the entire computing system, while the AI inference side, after the large-scale deployment of cutting-edge AI technologies, places greater emphasis on per-token cost, latency, and energy efficiency. "The era of AI inference has arrived," Huang said on Monday at the GTC conference. "And the demand for inference is continuously rising," he added.

The H200's 141GB of HBM3e therefore remains very attractive for long contexts, larger batch sizes, retrieval augmentation, and large-scale, high-efficiency batch deployment of AI inference clusters. Coupled with the strong pull of the CUDA ecosystem, it still represents "high-end usable computing power under constrained conditions" for the Chinese market. Meanwhile, CUDA, CUDA-X, ready-made model adaptations, development toolchains, and operational experience significantly reduce migration and deployment costs for Chinese customers. For Wall Street institutional funds, this is not a grand narrative of "NVIDIA turning around through the Chinese market," but an additional, potentially severely underestimated source of upside demand from China on top of the already strong global AI computing infrastructure mainline.

NVIDIA CEO Jensen Huang showcased the company's "unprecedented AI computing revenue super blueprint" for the AI infrastructure sector at the GTC conference in the early hours of March 17, Beijing time. He told global investors that, driven by strong demand for Blackwell architecture GPUs and the upcoming mass production of the Vera Rubin architecture AI computing system, future revenue in the artificial intelligence chip sector could reach at least $1 trillion by 2027, far exceeding the $500 billion AI computing infrastructure blueprint for 2026 projected at the previous GTC conference.
As model scale, inference pipelines, and multimodal/agentic AI workloads drive exponential growth in computing power consumption, tech giants' capital expenditures are increasingly concentrated on AI computing infrastructure. Global investors continue to anchor the "AI bull market narrative" on NVIDIA, Google's TPU clusters, and AMD's new product iterations and AI computing cluster delivery expectations, making it one of the most certain investment narratives in the global stock market.

This also implies that investment themes closely tied to AI training/inference, such as power supply, liquid cooling systems, and optical interconnect supply chains, will continue to rank among the hottest sectors in the stock market, even as geopolitical uncertainties in the Middle East persist for AI computing leaders like NVIDIA, AMD, Broadcom, TSMC, and Micron. According to Wall Street giants Morgan Stanley, Citigroup, Loop Capital, and Wedbush, the global AI infrastructure investment wave centered on AI computing hardware is far from over and is only at the beginning.
Driven by the unprecedented "AI inference computing demand storm," this round of global AI infrastructure investment, expected to last until 2030, could reach as high as $3 trillion to $4 trillion.

### Related Stocks

- [NVDA.US](https://longbridge.com/en/quote/NVDA.US.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [NVDL.US](https://longbridge.com/en/quote/NVDL.US.md)
- [NVDY.US](https://longbridge.com/en/quote/NVDY.US.md)
- [NVDU.US](https://longbridge.com/en/quote/NVDU.US.md)
- [SOXL.US](https://longbridge.com/en/quote/SOXL.US.md)
- [XLK.US](https://longbridge.com/en/quote/XLK.US.md)
- [XSW.US](https://longbridge.com/en/quote/XSW.US.md)
- [IGV.US](https://longbridge.com/en/quote/IGV.US.md)
- [NVDX.US](https://longbridge.com/en/quote/NVDX.US.md)
- [07788.HK](https://longbridge.com/en/quote/07788.HK.md)
- [07388.HK](https://longbridge.com/en/quote/07388.HK.md)
- [NVDD.US](https://longbridge.com/en/quote/NVDD.US.md)
- [NVDQ.US](https://longbridge.com/en/quote/NVDQ.US.md)

## Related News & Research

- [The AI Stock Wall Street Can't Stop Talking About in 2026](https://longbridge.com/en/news/282407351.md)
- [Korean AI chip startup DEEPX, Hyundai work on robots powered by generative AI](https://longbridge.com/en/news/282774224.md)
- [ASML investors bet on 'picks and shovels' of AI revolution](https://longbridge.com/en/news/282682545.md)
- [Siemens Accelerates AI Chip Verification to Trillion‑Cycle Scale with NVIDIA Technology](https://longbridge.com/en/news/282222056.md)
- [Wall Street Financial Group Inc. Acquires 2,961 Shares of NVIDIA Corporation $NVDA](https://longbridge.com/en/news/282537372.md)