---
title: "Not just GPUs! NVIDIA GTC 2026 launches new LPU and CPU products, comprehensively laying out every aspect of AI data centers"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/279364705.md"
description: "NVIDIA launched several new chips at the GTC conference, including the Nvidia Groq 3 Language Processing Unit (LPU) and the Vera Central Processing Unit (CPU). The five large server racks released are designed to meet the diverse needs of AI data centers. The Groq 3 chip focuses on AI inference, marking NVIDIA's positioning in the dedicated inference chip sector. The new LPX platform integrates 128 Groq 3 LPUs, enhancing throughput by 35 times per megawatt and creating a 10-fold revenue potential"
datetime: "2026-03-17T04:06:07.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/279364705.md)
  - [en](https://longbridge.com/en/news/279364705.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/279364705.md)
---

# Not just GPUs! NVIDIA GTC 2026 launches new LPU and CPU products, comprehensively laying out every aspect of AI data centers

According to Zhitong Finance APP, on Monday Eastern Time, NVIDIA (NVDA.US) officially kicked off the GTC conference in San Jose, California, launching multiple new chips and platforms, from the next-generation Nvidia Groq 3 Language Processing Unit (LPU) to the Vera Central Processing Unit (CPU) cabinet designed to compete with Intel (INTC.US) and AMD (AMD.US).

It is reported that NVIDIA has launched a total of five large server cabinets, each targeting different scenario needs within AI data centers.

The most significant release among them is the Nvidia Groq 3 chip. In December last year, NVIDIA paid $20 billion to license Groq's technology and brought its founder Jonathan Ross, President Sunny Madra, and the core team on board.

The Groq processor specializes in AI inference—the core process of running AI models. When users input commands and receive responses in ChatGPT, Claude, or Gemini, it is the inference technology that is at work behind the scenes.

Unlike NVIDIA's general-purpose GPUs, which can both train and run models, Groq 3 is a chip built solely for inference. Its launch means the company now formally offers a dedicated inference product, meeting the AI market's urgent shift from model training to model deployment.

Ian Buck, NVIDIA's Vice President of Hyperscale and High-Performance Computing, stated that while GPUs support larger memory capacities, the Groq 3 LPU's memory offers faster access speeds. Combining the strengths of both yields the new Groq 3 LPX platform: a server cabinet integrating 128 independent Groq 3 LPUs that, working in conjunction with the Vera Rubin NVL72 rack, delivers a 35-fold increase in throughput per megawatt and a tenfold revenue potential.

"The LPX architecture optimized for trillion-parameter models and million-token contexts perfectly complements Vera Rubin, maximizing efficiency between power consumption, memory, and computing power. This breakthrough in throughput per watt and token performance will give rise to ultra-high-end trillion-parameter inference services, opening new growth spaces for all AI service providers," NVIDIA emphasized in an official statement.

The launch of the LPX cabinet directly addresses market concerns that NVIDIA might lose ground to emerging inference chip startups. The independently deployed Vera CPU rack is also noteworthy: this cluster system, built from 256 liquid-cooled Vera chips, is the first time NVIDIA has decoupled the Vera CPU from the "Vera Rubin superchip" (which pairs 1 Vera CPU with 2 Rubin GPUs).

With the rise of agentic AI, the strategic value of CPUs is becoming increasingly prominent. When AI agents need to perform tasks such as browsing the web or extracting information from tables, CPU performance directly determines execution efficiency. In scenarios such as data mining and personalized recommendation, where context must be analyzed before being handed to GPUs, CPUs likewise play an irreplaceable role. "Vera is the ultimate CPU tailored for agentic AI workloads," Buck said. "We have redefined CPU architecture—with NVIDIA's Olympus core designed specifically for AI execution, enabling faster responses under extreme conditions, perfectly suited for all reinforcement learning scenarios."

This is not NVIDIA's first foray into the CPU field. Last month, an agreement was reached with Meta (META.US) to deploy the largest-ever cluster of the previous generation Grace CPUs. The independent release of Vera marks NVIDIA's formal establishment of a "GPU + CPU" dual-drive strategy, targeting the data center market dominated by Intel and AMD.

In addition to the products above, NVIDIA also showcased the Bluefield-4 STX storage cabinet system, which it says delivers a performance leap over traditional solutions, and the Spectrum-6 SPX network cabinet.

As demand for AI platforms continues to grow, NVIDIA's new product line is expected to further boost data center revenue. In fiscal year 2026, its data center revenue reached $193.5 billion, a significant increase from $116.2 billion in fiscal year 2025. Of the $650 billion in AI capital expenditures planned this year by giants such as Amazon (AMZN.US), Google (GOOGL.US), Meta, and Microsoft (MSFT.US), NVIDIA will undoubtedly capture a substantial share.

### Related Stocks

- [NVDA.US](https://longbridge.com/en/quote/NVDA.US.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [NVDY.US](https://longbridge.com/en/quote/NVDY.US.md)
- [NVDL.US](https://longbridge.com/en/quote/NVDL.US.md)
- [NVDQ.US](https://longbridge.com/en/quote/NVDQ.US.md)
- [NVDX.US](https://longbridge.com/en/quote/NVDX.US.md)
- [NVDU.US](https://longbridge.com/en/quote/NVDU.US.md)
- [SOXL.US](https://longbridge.com/en/quote/SOXL.US.md)
- [XLK.US](https://longbridge.com/en/quote/XLK.US.md)
- [IGV.US](https://longbridge.com/en/quote/IGV.US.md)
- [XSW.US](https://longbridge.com/en/quote/XSW.US.md)
- [07788.HK](https://longbridge.com/en/quote/07788.HK.md)
- [07388.HK](https://longbridge.com/en/quote/07388.HK.md)
- [NVDD.US](https://longbridge.com/en/quote/NVDD.US.md)

## Related News & Research

- [The AI Stock Wall Street Can't Stop Talking About in 2026](https://longbridge.com/en/news/282407351.md)
- [Korean AI chip startup DEEPX, Hyundai work on robots powered by generative AI](https://longbridge.com/en/news/282774224.md)
- [Citron Raises Amazon Price Target to $300, Amazon May Challenge Nvidia’s AI Chip Market Position](https://longbridge.com/en/news/282299180.md)
- [Move Over Nvidia: Citron Sees 28% Upside In Amazon's Hidden Trillion-Dollar Chip Empire](https://longbridge.com/en/news/282302318.md)
- [Siemens Accelerates AI Chip Verification to Trillion‑Cycle Scale with NVIDIA Technology](https://longbridge.com/en/news/282222056.md)