---
title: "Nvidia's latest AI chip will cost more than $30,000, CEO says"
description: "Nvidia's CEO Jensen Huang announced that the company's next-generation AI chip, Blackwell, will be priced between $30,000 and $40,000 per unit. The chip is expected to be in high demand for training and deploying AI software."
type: "news"
locale: "en"
url: "https://longbridge.com/en/news/200203195.md"
published_at: "2024-03-19T16:52:09.000Z"
---

# Nvidia's latest AI chip will cost more than $30,000, CEO says

> Nvidia's CEO Jensen Huang announced that the company's next-generation AI chip, Blackwell, will be priced between $30,000 and $40,000 per unit. The chip is expected to be in high demand for training and deploying AI software. This pricing range is similar to its predecessor's, the H100. Nvidia's AI chips have contributed to a significant increase in sales and are widely used by top AI companies and developers. The Blackwell AI accelerator will come in several versions and is set to ship later this year.

Nvidia's next-generation graphics processor for artificial intelligence, called Blackwell, will cost between $30,000 and $40,000 per unit, CEO Jensen Huang told CNBC's Jim Cramer.

"This will cost $30 to $40 thousand dollars," Huang said, holding up the Blackwell chip. "We had to invent some new technology to make it possible," he continued, estimating that Nvidia spent about $10 billion in research and development costs.

The price suggests that the chip, which is likely to be in hot demand for training and deploying AI software like ChatGPT, will be priced in a similar range to its predecessor, the H100, or "Hopper" generation, which cost between $25,000 and $40,000 per chip, according to analyst estimates. The Hopper generation, introduced in 2022, represented a significant price increase for Nvidia's AI chips over the previous generation.
*Nvidia CEO Jensen Huang compares the size of the new "Blackwell" chip versus the current "Hopper" H100 chip at the company's developer conference in San Jose, California. Source: Nvidia*

Nvidia announces a new generation of AI chips about every two years. The latest, like Blackwell, are generally faster and more energy efficient, and Nvidia uses the publicity around a new generation to rake in orders for new GPUs. Blackwell combines two chips and is physically larger than the previous generation.

Nvidia's AI chips have driven a tripling of quarterly Nvidia sales since the AI boom kicked off in late 2022, when OpenAI's ChatGPT was announced. Most of the top AI companies and developers have been using Nvidia's H100 to train their AI models over the past year. For example, Meta said this year that it is buying hundreds of thousands of Nvidia H100 GPUs.

Nvidia does not reveal the list price for its chips. They come in several different configurations, and the price an end customer like Meta or Microsoft might pay depends on factors such as the volume of chips purchased, and on whether the customer buys the chips from Nvidia directly as part of a complete system or through a vendor like Dell, HP, or Supermicro that builds AI servers. Some servers are built with as many as eight AI GPUs.

On Monday, Nvidia announced at least three different versions of the Blackwell AI accelerator: a B100, a B200, and a GB200 that pairs two Blackwell GPUs with an Arm-based CPU. They have slightly different memory configurations and are expected to ship later this year.
### Related Stocks

- [NVDA.US - NVIDIA](https://longbridge.com/en/quote/NVDA.US.md)
- [NVD.DE - NVIDIA Corporation](https://longbridge.com/en/quote/NVD.DE.md)
- [NVDL.US - GraniteShares 2x Long NVDA Daily ETF](https://longbridge.com/en/quote/NVDL.US.md)

## Related News & Research

| Title | Description | URL |
|-------|-------------|-----|
| Jensen Huang teases "never-before-seen" new chips; next-generation Feynman architecture may take center stage | Jensen Huang previewed that this year's GTC conference will unveil all-new chip products "the world has never seen." Analysts believe the new products may involve Rubin-series derivatives or a more revolutionary Feynman-architecture chip; the market expects the Feynman architecture to be deeply optimized for inference workloads. | [Link](https://longbridge.com/en/news/276310964.md) |
| Institutions "most overweight" SanDisk, "most underweight" Nvidia | According to Morgan Stanley's latest statistics, institutional underweighting of large-cap US tech stocks is the largest in 17 years. Relative to the stocks' Q4 2025 S&P 500 index weights, $NVDA remains the large-cap tech stock institutions underweight the most, followed by Apple, Microsoft, Amazon, and Broadcom, while storage giant SanDisk is the most over… | [Link](https://longbridge.com/en/news/276289765.md) |
| Copying Nvidia's playbook to spur chip sales, AMD guarantees borrowing for an "AI cloud" | AMD is deploying an aggressive financial tactic to expand market share: guaranteeing a $300 million chip-purchase loan for startup Crusoe and committing to rent the chips itself as a backstop if Crusoe lacks customers. This strategy, which replicates Nvidia's "GPU-rental cloud" path, can lift sales in the short term, but it also leaves AMD facing greater… | [Link](https://longbridge.com/en/news/276401504.md) |
| Backing the AI trade! OpenAI is finalizing a new funding round: raising up to $100 billion at an $830 billion valuation | OpenAI is pursuing a new funding round at an $830 billion valuation, targeting $100 billion. SoftBank plans to lead with $30 billion; Amazon and Nvidia may invest $50 billion and $30 billion respectively, and Microsoft plans to invest several billion dollars. This round is OpenAI's first since its corporate restructuring last fall… | [Link](https://longbridge.com/en/news/276298180.md) |
| The LPDDR6 era arrives! AI demand is so strong that next-generation DRAM will reach the market sooner than expected | LPDDR6 offers 1.5x the performance of the previous generation and could enter formal commercial use as early as the second half of the year; Nvidia, Samsung, Qualcomm, and other giants are actively positioning for it. Most HPC semiconductor design firms are currently considering carrying LPDDR5X and LPDDR6 IP in parallel, particularly in chip designs at 4 nm and below, where demand has emerged… | [Link](https://longbridge.com/en/news/276431575.md) |

---

> **Disclaimer**: This article is for reference only and does not constitute any investment advice.