---
title: "Nvidia focuses on intelligent agents! The open-source model Nemotron 3 Super has 120 billion parameters and a fivefold increase in throughput"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/278760713.md"
description: "Nemotron 3 Super activates only 12 billion parameters during inference and natively supports a context window of 1 million tokens. The performance leap comes from three architectural innovations: a hybrid Mamba-Transformer backbone, a latent mixture of experts (latent MoE), and multi-token prediction (MTP). The model runs on the Blackwell platform at NVFP4 precision, achieving inference speeds up to four times those of FP8 on the Hopper platform with no loss in accuracy. Perplexity has become the first partner to adopt the model for executing agent tasks."
datetime: "2026-03-11T16:02:40.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/278760713.md)
  - [en](https://longbridge.com/en/news/278760713.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/278760713.md)
---

> Supported Languages: [简体中文](https://longbridge.com/zh-CN/news/278760713.md) | [繁體中文](https://longbridge.com/zh-HK/news/278760713.md)

# Nvidia focuses on intelligent agents! The open-source model Nemotron 3 Super has 120 billion parameters and a fivefold increase in throughput

Nvidia is making strides in the race for autonomous agent infrastructure, marking a strategic shift for the chip giant from hardware supplier to deep involvement in the model layer of the artificial intelligence (AI) competition.

On Wednesday, March 11th, Eastern Time, Nvidia announced the launch of its next-generation open-source large language model, Nemotron 3 Super, designed specifically for enterprise-grade multi-agent systems. With a new mixture of experts (MoE) architecture, it boosts inference throughput to more than five times that of the previous-generation model.
The model has a total of 120 billion parameters, activates only 12 billion of them during inference, and natively supports a context window of 1 million tokens. Nvidia stated that Nemotron 3 Super tops the Artificial Analysis rankings for efficiency and openness, leads in accuracy among models of the same scale, and powers Nvidia's AI-Q research agents to first place on both the DeepResearch Bench and DeepResearch Bench II leaderboards.

Nvidia disclosed the first batch of Nemotron 3 Super partners. AI search company Perplexity has become the first partner to adopt the model for executing agent tasks, providing users with multi-agent orchestration services in its search and computer products. Enterprise software giants such as Palantir, Siemens, Cadence, Dassault Systèmes, and Amdocs have also announced plans to deploy the model for workflow automation in telecommunications, cybersecurity, semiconductor design, and manufacturing.

Nemotron 3 Super is now available to developers through Nvidia's build.nvidia.com, Hugging Face, and OpenRouter.

## Two Major Bottlenecks Give Rise to a New Architecture

Nvidia pointed out in a blog post that enterprises face two core constraints when moving from chatbots to multi-agent applications.

The first is "context explosion": multi-agent workflows must retransmit the complete history (including tool outputs and intermediate reasoning steps) with each interaction, producing token counts up to 15 times those of standard conversations. As tasks grow longer, this massive context not only increases costs but can also cause "goal drift," with agents gradually deviating from their original objectives.

The second is the "thinking tax": complex agents must reason at every step, and if each sub-task calls a large model, multi-agent applications become impractical due to high costs and slow responses.
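To make the "context explosion" concrete, here is a toy Python sketch of why re-sending the full history on every agent hop inflates total token processing. All numbers are illustrative assumptions, not Nvidia's benchmarks, and the hypothetical `tokens_processed` helper is not part of any Nvidia API.

```python
# Hypothetical sketch: why re-transmitting full history inflates token usage.
# The per-step sizes below are illustrative, not measured figures.

def tokens_processed(steps: int, tokens_per_step: int) -> int:
    """Total tokens the model must read when every agent hop re-transmits
    the complete history (prompt, tool outputs, intermediate reasoning)
    accumulated so far."""
    total = 0
    history = 0
    for _ in range(steps):
        history += tokens_per_step  # new tool output / reasoning appended
        total += history            # whole history re-read on this hop
    return total

# A 20-step agent workflow vs. reading the same content once:
single_pass = 20 * 2_000                    # 40,000 tokens read once
multi_agent = tokens_processed(20, 2_000)   # grows quadratically: 420,000
print(multi_agent / single_pass)            # 10.5x inflation in this toy setup
```

The quadratic growth in this toy model is why a large native context window (holding state once) and cheap per-step reasoning matter for agent workloads.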
Nemotron 3 Super addresses context explosion directly with its 1-million-token native context window, which keeps agents' state coherent across ultra-long tasks and prevents goal drift. The hybrid architecture is designed to ease the thinking tax.

## Triple Architecture Innovation Supports Fivefold Acceleration

Nvidia's blog post explains that Nemotron 3 Super's performance leap comes from three core architectural innovations.

- **Hybrid Mamba-Transformer backbone**: The model interleaves Mamba-2 layers with Transformer attention layers. The Mamba layers handle most sequence processing, delivering a fourfold improvement in memory and compute efficiency with linear time complexity and making a million-token context window practical; Transformer layers are inserted at critical depths to preserve precise associative recall.
- **Latent mixture of experts (latent MoE)**: Before routing decisions, token embeddings are compressed into a low-rank latent space; expert computation happens in this smaller dimension before being projected back to the full dimension. Nvidia states that this design lets the model activate four times as many experts at the same inference cost, enabling finer-grained specialized routing, such as activating different experts for Python syntax and SQL logic.
- **Multi-token prediction (MTP)**: The model predicts multiple future tokens in a single forward pass instead of generating them one by one. Nvidia claims this strengthens the model's internalization of long-range logical dependencies during training and provides built-in speculative decoding at inference time, yielding up to a threefold speedup on structured generation tasks such as code and tool invocation, without a separate draft model.
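The latent MoE idea described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed dimensions (not Nemotron's actual sizes or routing rules): embeddings are compressed to a low-rank latent space, a top-k router and the experts operate in that smaller dimension, and the result is projected back.

```python
import numpy as np

# Minimal latent-MoE sketch. All dimensions, the top-k choice, and the
# initialization scale are illustrative assumptions.
rng = np.random.default_rng(0)
d_model, d_latent, n_experts, top_k = 512, 64, 16, 4

W_down = rng.standard_normal((d_model, d_latent)) * 0.02   # compress
W_up   = rng.standard_normal((d_latent, d_model)) * 0.02   # project back
W_gate = rng.standard_normal((d_latent, n_experts)) * 0.02 # router
experts = [rng.standard_normal((d_latent, d_latent)) * 0.02
           for _ in range(n_experts)]                      # latent-space experts

def latent_moe(x: np.ndarray) -> np.ndarray:
    """x: (tokens, d_model) -> (tokens, d_model)."""
    z = x @ W_down                                   # (tokens, d_latent)
    logits = z @ W_gate                              # routing in latent space
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # top-k experts per token
    gates = np.take_along_axis(logits, top, axis=-1)
    gates = np.exp(gates) / np.exp(gates).sum(-1, keepdims=True)  # softmax
    out = np.zeros_like(z)
    for t in range(z.shape[0]):                      # mix selected experts
        for g, e in zip(gates[t], top[t]):
            out[t] += g * (z[t] @ experts[e])
    return out @ W_up                                # back to full dimension

y = latent_moe(rng.standard_normal((8, d_model)))
print(y.shape)  # (8, 512)
```

Because routing and expert matmuls run at `d_latent` rather than `d_model`, each activated expert is cheaper, which is the intuition behind activating more experts at the same inference cost.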
On Nvidia's Blackwell platform, the model runs at NVFP4 precision, achieving up to four times the inference speed of FP8 on Nvidia's Hopper platform with no loss in accuracy, according to Nvidia.

## Open Weights Paired with a Multi-Layer Ecosystem Strategy

Unlike today's mainstream frontier models, which are generally available only through APIs, Nvidia has chosen to release the weights, datasets, and training recipes of Nemotron 3 Super under a permissive license, allowing developers to deploy and customize it freely on workstations, in data centers, or in the cloud.

Nvidia has also published the complete training and evaluation recipe, covering the entire process from pre-training to alignment, along with more than 10 trillion tokens of pre-training and post-training data, 21 reinforcement learning training environments, and evaluation suites. During pre-training, the model was trained on 25 trillion tokens at native NVFP4 precision, learning under four-bit floating-point constraints from the first gradient update rather than through post-training quantization.

At the ecosystem level, Nvidia has partnered with major cloud service providers and hardware manufacturers including Google Cloud Vertex AI, Oracle Cloud Infrastructure, Dell Technologies, and HPE; access through Amazon AWS Bedrock and Microsoft Azure is in preparation. Software development agent companies such as CodeRabbit, Factory, and Greptile, along with life sciences institutions Edison Scientific and Lila Sciences, have announced plans to integrate the model into their agent workflows.

## "Super + Nano" Combination Deployment

Nvidia also elaborated in its blog post on the tiered deployment logic of the Nemotron 3 series.
The Nemotron 3 Nano model, launched last December, suits targeted single-step tasks within agent workflows, while Nemotron 3 Super is designed for complex multi-step tasks that require deep planning and reasoning.

Taking software development as an example, Nvidia suggests that simple merge requests can be handled by Nano, complex coding tasks requiring a deep understanding of the codebase should go to Super, and expert-level tasks can escalate further to third-party proprietary models. This layered architecture is meant to help enterprises strike an optimal balance between cost and capability.

Nvidia's blog post cites several application scenarios: software development agents can load an entire codebase into context at once for end-to-end code generation and debugging; in financial analysis, thousands of pages of reports can be held in memory, eliminating repeated reasoning across long dialogues; and in cybersecurity, autonomous security orchestration benefits from high-precision tool calls, avoiding execution errors in high-risk environments.

## Extending the Hardware Moat to the Model Layer

Nvidia's open-model strategy rests on clear business logic. Nvidia built its dominant position in AI primarily by selling GPUs to model providers such as OpenAI and Google. Now, if Nemotron becomes the mainstream foundation model for enterprise agents, the GPU infrastructure needed to run it at scale will still come from Nvidia: openness at the model layer reinforces demand at the hardware layer.

Currently, Nemotron 3 Super is packaged and delivered through Nvidia's NIM microservices, supporting flexible deployment from on-premises to the cloud.
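The tiered Nano/Super/third-party dispatch described above can be sketched as a simple router. The complexity heuristic, thresholds, and model names below are illustrative assumptions, not an Nvidia API.

```python
# Hypothetical sketch of tiered model routing for agent tasks.
# Thresholds and model identifiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    steps: int           # estimated reasoning steps
    context_tokens: int  # context the task must hold

def route(task: Task) -> str:
    if task.steps <= 2 and task.context_tokens <= 32_000:
        return "nemotron-3-nano"         # targeted single-step work
    if task.steps <= 20 or task.context_tokens <= 1_000_000:
        return "nemotron-3-super"        # deep multi-step planning
    return "third-party-frontier-model"  # expert-level escalation

print(route(Task("approve a simple merge request", 1, 4_000)))
print(route(Task("refactor module with codebase in context", 12, 600_000)))
```

The first call routes to the Nano tier and the second to the Super tier; in practice such a router would use learned or measured task signals rather than hand-set thresholds.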
Whether the performance figures hold up under production workloads, and how enterprise customers weigh open flexibility against competitors' proprietary model capabilities, will be the key variables in judging whether this strategy succeeds.

### Related Stocks

- [State Street® SPDR® S&P® Smcndctr ETF (XSD.US)](https://longbridge.com/en/quote/XSD.US.md)
- [China Southern CSI Semiconductor Industry Custom ETF (159325.CN)](https://longbridge.com/en/quote/159325.CN.md)
- [iShares Semiconductor ETF (SOXX.US)](https://longbridge.com/en/quote/SOXX.US.md)
- [Guotai CES Semiconductor Chip Industry ETF (512760.CN)](https://longbridge.com/en/quote/512760.CN.md)
- [YieldMax NVDA Option Income Strategy ETF (NVDY.US)](https://longbridge.com/en/quote/NVDY.US.md)
- [VanEck Vectors Semiconductor UCITS ETF Accum A USD (SMH.UK)](https://longbridge.com/en/quote/SMH.UK.md)
- [Direxion Daily NVDA Bull 2X Shares (NVDU.US)](https://longbridge.com/en/quote/NVDU.US.md)
- [Invesco Semiconductors ETF (PSI.US)](https://longbridge.com/en/quote/PSI.US.md)
- [GraniteShares 2x Long NVDA Daily ETF (NVDL.US)](https://longbridge.com/en/quote/NVDL.US.md)
- [Direxion Daily Semicondct Bull 3X ETF (SOXL.US)](https://longbridge.com/en/quote/SOXL.US.md)

## Related News & Research

- [Ayar Labs and Wiwynn Partner to Bring Co-Packaged Optics to Rack-Scale Ai Systems](https://longbridge.com/en/news/278760884.md)
- [Is Nvidia Ditching Micron for Samsung and SK Hynix on Vera Rubin?](https://longbridge.com/en/news/278340004.md)
- [Ahead of GTC 2026, Nvidia Launches Open-Source AI Platform 'NemoClaw'](https://longbridge.com/en/news/278491674.md)
- [Better Artificial Intelligence (AI) Stock to Buy in March: Nvidia vs. Taiwan Semiconductor Manufacturing Co.](https://longbridge.com/en/news/278624430.md)
- [Nvidia's DLSS 4.5 with 6x Frame Generation is rolling out at the end of March](https://longbridge.com/en/news/278583046.md)