---
title: "\"AI Bull Market Narrative\" Creates Huge Waves Again! Jensen Huang Unveils Trillion-Dollar AI Blueprint as NVIDIA Sets Sail Towards a $6 Trillion Market Value"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/279369355.md"
description: "NVIDIA CEO Jensen Huang showcased a grand blueprint for future AI computing infrastructure at the GTC conference, with revenue expected to reach $1 trillion by 2027, driving the stock to new highs. Analysts expect NVIDIA's market value to surpass $6 trillion in the next 12 months, with the most optimistic forecast reaching $8.8 trillion. Global investors will continue to focus on NVIDIA and its competitors in the AI computing field, one of the most certain investment narratives in the stock market."
datetime: "2026-03-17T04:55:04.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/279369355.md)
  - [en](https://longbridge.com/en/news/279369355.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/279369355.md)
---

# "AI Bull Market Narrative" Creates Huge Waves Again! Jensen Huang Unveils Trillion-Dollar AI Blueprint as NVIDIA Sets Sail Towards a $6 Trillion Market Value

According to Zhitong Finance APP, NVIDIA CEO Jensen Huang showcased NVIDIA's "unprecedented AI computing power revenue super blueprint" at the GTC conference in the early morning of March 17, Beijing time.
He told global investors that, driven by strong demand for Blackwell-architecture GPU computing power and explosive demand for the soon-to-be-mass-produced Vera Rubin AI computing systems, the company's future revenue from artificial intelligence chips **could reach at least $1 trillion by 2027, far exceeding the $500 billion AI computing infrastructure blueprint for 2026 proposed at the previous GTC conference.**

Analysts at Goldman Sachs, Wedbush, and Morgan Stanley who are bullish on NVIDIA's stock believe that, with a stronger-than-expected revenue growth outlook, NVIDIA's market value is poised to break the $5 trillion mark again, after first doing so last October, and is very likely to climb well above its current level to a new historical high. NVIDIA's share price may soon set a new record and pull the global AI computing supply chain into a new upward trajectory, **and the trillion-dollar AI computing blueprint NVIDIA has put forward will do much to support the "AI bull market narrative" that remains the main line of the capital markets. Based on the average target price of Wall Street analysts, NVIDIA's market value would exceed $6 trillion within the next 12 months, with the most optimistic Wall Street expectation implying a total market value of $8.8 trillion.**

![1773720942(1).png](https://imageproxy.pbkrs.com/https://img.zhitongcaijing.com/image/20260317/1773720955908122.png?x-oss-process=image/auto-orient,1/interlace,1/resize,w_1440,h_1440/quality,q_95/format,jpg)

As model scale, inference workloads, and multimodal/agentic AI drive computing power consumption to expand exponentially, tech giants' capital expenditures are increasingly concentrated on AI computing infrastructure.
**Global investors continue to anchor the "AI bull market narrative" around NVIDIA, Google's TPU clusters, and AMD's new product iterations and AI cluster delivery expectations as one of the most certain investment narratives in the global stock market. This also means that investment themes closely tied to AI training and inference, such as electricity, liquid cooling systems, and optical interconnect supply chains, will continue to rank among the hottest camps in the stock market, alongside AI computing leaders like NVIDIA, AMD, Broadcom, TSMC, and Micron, even amid geopolitical uncertainty in the Middle East.**

At the annual GTC developer conference in San Jose, California, Jensen Huang announced a new central processing unit (a data-center server-grade CPU) and a set of LPU AI inference infrastructure systems built on the exclusive AI inference architecture of the startup Groq, whose technology license NVIDIA acquired for $17 billion last December. These moves are part of Jensen Huang's effort to consolidate the company's position in so-called "inference computing": the massive computational process of answering queries from business and consumer users worldwide. In this area, NVIDIA's AI GPU systems face intensifying competition from central processing units and custom AI ASICs developed by companies such as Google (the AI ASIC route led by Google's TPU).

In recent years, NVIDIA chips have dominated the training of large AI models, a focal point of market attention.
**The AI training side, which NVIDIA's AI GPUs all but monopolize, demands greater cluster generality and rapid iteration across the entire computing system, while the AI inference side, now that cutting-edge AI is being deployed at scale, places greater emphasis on per-token cost, latency, and energy efficiency.**

"The era of artificial intelligence inference has arrived," Jensen Huang said at the GTC conference. "And the demand for inference is continuously rising," he added.

Dressed in his signature black leather jacket, Jensen Huang delivered his speech in an ice hockey arena that seats more than 18,000 people. The four-day conference has become one of the largest stages for showcasing AI technology globally. "I just want to remind everyone that this is a highly anticipated technology conference," he told the audience.

## The AI Inference Wave Is Coming: NVIDIA's "AI Computing Power Blueprint" Soars to Trillions

If Jensen Huang's speech at this GTC could be summarized in one sentence, **the core message is this: NVIDIA is restructuring itself from a "company selling AI GPUs" into a "chip giant selling AI factories."** The keynote opened with the token as the basic unit of modern AI, and **Jensen Huang shifted the industry's focus from "training" to "inference plus agentic AI," revising the revenue opportunity for AI infrastructure in 2025-2027 from the previous $500 billion to at least $1 trillion.** This is not a simple demand adjustment but a message to the capital markets: future competition in computing power will no longer turn only on peak training FLOPS, but on who can continuously produce tokens at the lowest cost, the highest data throughput, and the best latency.
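The factory-level metrics just named (tokens per watt, per-token cost, latency) can be made concrete with a short sketch. The helper functions, rack-level numbers, and electricity price below are illustrative assumptions for the sketch, not figures from NVIDIA or this article:

```python
# Illustrative "AI factory" economics: tokens per watt and electricity
# cost per token for two hypothetical rack configurations. All numbers
# here are made up for the sketch, not NVIDIA or article figures.

def tokens_per_watt(tokens_per_sec: float, power_watts: float) -> float:
    """Sustained token throughput per watt of rack power."""
    return tokens_per_sec / power_watts

def power_cost_per_million_tokens(power_watts: float, tokens_per_sec: float,
                                  usd_per_kwh: float = 0.08) -> float:
    """Electricity cost alone (USD) to produce one million tokens."""
    seconds = 1_000_000 / tokens_per_sec
    kilowatt_hours = power_watts * seconds / 3_600_000  # watt-seconds -> kWh
    return kilowatt_hours * usd_per_kwh

# Two hypothetical racks at the same power budget; the second assumes a
# 10x throughput-per-watt improvement, purely for illustration.
old_rack = {"tok_s": 500_000, "watts": 120_000}
new_rack = {"tok_s": 5_000_000, "watts": 120_000}

old_tpw = tokens_per_watt(old_rack["tok_s"], old_rack["watts"])
new_tpw = tokens_per_watt(new_rack["tok_s"], new_rack["watts"])
old_cost = power_cost_per_million_tokens(old_rack["watts"], old_rack["tok_s"])
new_cost = power_cost_per_million_tokens(new_rack["watts"], new_rack["tok_s"])
# At a fixed power budget, 10x tokens per watt implies 1/10 the power
# cost per token: the two metrics are two views of the same ratio.
```

This is why keynote-style claims of "N times throughput per watt" and "one-Nth the per-token cost" tend to move together when the power budget is held fixed.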
Around this narrative of expanding AI computing demand, Jensen Huang laid out a very clear underlying business logic: data centers are no longer "storage centers" but "AI factories." Under a fixed power budget, **the most critical metrics are not single-chip peak performance but tokens per watt, cost per token, and time to first production. This is why he repeatedly emphasized "extreme co-design": optimizing computation, networking, storage, software, power supply, and cooling as a whole. Official statements indicate that the Vera Rubin NVL72 can achieve up to 10 times the inference throughput per watt of the Blackwell platform at one-tenth the per-token cost, and that the number of GPUs required to train large-scale MoE models can be cut to a quarter. This is no longer "chip iteration" but a rewriting of the economics of AI infrastructure.**

At the hardware level, the most significant change at this GTC is that NVIDIA has officially integrated CPU, GPU, LPU, DPU, SuperNIC, switch chips, and storage architecture into a platform-level system. The officially defined Vera Rubin platform includes the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, Spectrum-6 Ethernet switch, and the newly integrated NVIDIA Groq 3 LPU. **Among them, the Vera Rubin NVL72 rack consists of 72 Rubin GPUs plus 36 Vera CPUs, while the Groq 3 LPX rack is designed specifically for low-latency inference. Jensen Huang innovatively splits AI inference into two stages: prefill is handled by Vera Rubin, and decode is taken over by Groq AI chips. NVIDIA's answer to the inference era is thus no longer "let the GPU do everything," but to separate high-throughput and ultra-low-latency processing through heterogeneous computing.**

On the software and ecosystem front, Jensen Huang's stance was equally aggressive.
Dynamo 1.0 is defined by NVIDIA as the inference operating system of the AI factory, with official claims of up to a 7x inference performance improvement on Blackwell. In the agent direction, NVIDIA launched Agent Toolkit, OpenShell, and NemoClaw, elevating OpenClaw into a platform akin to "the operating system for personal AI" and providing policy control, privacy routing, and security boundaries for enterprise deployment. **At the same time, NVIDIA expanded its open model family, including Nemotron, Cosmos, Isaac GR00T, Alpaymayo, BioNeMo, and Earth-2, and previewed the Feynman architecture roadmap: the next-generation platform will introduce the Rosa CPU, LP40 LPU, BlueField-5, CX10, and Kyber, continuing to push copper interconnect and co-packaged optics toward the next generation of AI factories.**

Extending further, **GTC 2026 is not just about data centers; NVIDIA also brought "physical AI" and "spatial computing" to the main stage**: IGX Thor has reached general availability, targeting industrial, medical, robotics, and edge computing; the Open Physical AI Data Factory Blueprint accelerates data generation, augmentation, and evaluation for robots, visual AI agents, and autonomous driving; and the Space-1 Vera Rubin Module extends the Vera Rubin architecture to orbital data centers, with officials claiming up to 25 times the AI computing power of the H100 for in-space inference. **This shows NVIDIA expanding its "AI factory" from cloud data centers into a unified infrastructure paradigm spanning cloud, edge, endpoints, vehicles, robotics, and even space.**

The real theme of GTC 2026 is not a single new product launch, as in the past, **but NVIDIA integrating GeForce, data center computing infrastructure, networking, storage, inference computing systems, agent platforms, robotics, and spatial computing into a unified
narrative: "upgrading from a single GPU supplier to an AI infrastructure general contractor." This is why the most noteworthy aspect of this conference is not the specifications of any particular AI chip, but how NVIDIA is using system-level products to lock in token economics, the inference monetization process, and infrastructure bargaining power for the years ahead.**

## Consolidation of AI Computing Infrastructure Monopoly: NVIDIA's Stock Price Aiming for Historical Highs?

"Before this, investors generally had concerns about the sustainability of tech giants' massive AI infrastructure spending, **but as Jensen Huang outlined a $1 trillion revenue opportunity by 2027, investors began to believe that demand for NVIDIA's AI infrastructure has long-term durability**," said Emarketer analyst Jacob Bourne. "**As the entire AI industry transitions from early experimentation to large-scale deployment, NVIDIA continues to hold its leading position in the AI computing market.**"

When Jensen Huang raised NVIDIA's AI chip and infrastructure opportunity to at least $1 trillion at GTC, **the market no longer saw a chip company selling ever-stronger GPUs, but an infrastructure empire attempting to define the next generation of "AI factory" production functions: moving from the training era to the inference era, and from single-chip competition to system-level dominance of entire racks, networks, and software stacks.** From Blackwell and Vera Rubin to the Groq technology collaboration aimed at low-latency decoding, NVIDIA is rewriting token throughput, revenue per watt, and inference monetization into a new valuation language.
At the GTC conference, Jensen Huang demonstrated with the $1 trillion opportunity that demand is still actively expanding, while also illustrating, with a complete platform of CPUs, GPUs, LPUs, high-performance networking components, software ecosystems, and agent toolchains, that NVIDIA's competitive unit is no longer a single AI chip but an entire AI factory. When Jensen Huang declared that "the inference inflection point has arrived," he was essentially telling the capital markets that AI capital expenditures are far from peaking and that true large-scale deployment is just beginning; **and when NVIDIA folds CPUs, GPUs, LPUs, networks, agent software, and data center economics into the same narrative, it is not merely launching a new product cycle, but steering a super-ship toward a market value imagination space above $5 trillion.**

The average target price compiled by TipRanks from Wall Street analysts is $273, which implies that, in their view, **NVIDIA has roughly 49% upside over the next 12 months from Monday's close, with the most optimistic target price reaching as high as $360.** The $273 target corresponds to a market value of approximately $6.6 trillion for NVIDIA. As of Monday's close, NVIDIA's stock price was $183.22, for a market capitalization of about $4.45 trillion.

![1773720739(1).png](https://imageproxy.pbkrs.com/https://img.zhitongcaijing.com/image/20260317/1773720752942098.png?x-oss-process=image/auto-orient,1/interlace,1/resize,w_1440,h_1440/quality,q_95/format,jpg)

Jensen Huang raised the revenue opportunity for AI chips and AI computing infrastructure to at least $1 trillion by 2027, significantly above the previous estimate of $500 billion by 2026 built around the Blackwell and Rubin architectures.
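The valuation figures quoted above can be sanity-checked with back-of-the-envelope arithmetic; the share count below is inferred from the quoted close and market cap rather than taken from a filing:

```python
# Back-of-the-envelope check of the valuation figures quoted above.
close_price = 183.22        # Monday close (USD), per the article
market_cap = 4.45e12        # ~$4.45 trillion at that close
avg_target = 273.0          # average analyst target price (TipRanks)
top_target = 360.0          # most optimistic analyst target price

# Implied share count, then market value at each target price.
shares = market_cap / close_price
implied_cap_avg = shares * avg_target   # ~$6.6 trillion
implied_cap_top = shares * top_target   # ~$8.7 trillion; the article's
                                        # $8.8T figure implies a slightly
                                        # larger share count
upside = avg_target / close_price - 1   # ~0.49 against this close
```

The average-target math reproduces the article's ~$6.6 trillion implied market value; the upside against the quoted close works out to roughly 49%.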
**Wall Street financial giant Goldman Sachs said after the GTC conference that the trillion-dollar revenue outlook provides the market with a longer-term demand endorsement, alleviating investors' anxiety that "AI capital expenditures may peak in 2026." In other words, Goldman Sachs' analyst team believes the presentation was not merely a showcase of new products but a re-anchoring of NVIDIA's order ceiling and earnings sustainability for the next two to three years.**

Goldman Sachs emphasized that NVIDIA not only released another extraordinarily powerful AI GPU but also commercialized inference in NVIDIA's own exclusive way, upgrading its AI computing infrastructure into the core equipment of the next stage of the global AI arms race. As noted above, Jensen Huang broke inference down into prefill and decode: the former handled by Vera Rubin, the latter taken over by the Groq 3 LPX/LPU, indicating that NVIDIA is expanding from a "training powerhouse" into a "general contractor for AI inference infrastructure." Goldman Sachs stressed that the official figures exceeded market expectations: Vera Rubin plus LPX can deliver up to 35 times the inference throughput per megawatt and up to 10 times the revenue opportunity for trillion-parameter models. Goldman Sachs concluded that NVIDIA is not only holding its position in the training market but also presenting a stronger monetization framework and a more complete heterogeneous computing solution for the power-constrained, latency-sensitive inference era.
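The prefill/decode split described above can be illustrated with a minimal routing sketch: prefill processes the whole prompt in one throughput-bound pass and fills a KV cache, while decode generates output tokens one latency-bound step at a time. The class names and token handling below are hypothetical stand-ins for illustration, not NVIDIA APIs:

```python
# Sketch of disaggregated inference serving: a throughput-optimized
# prefill pool and a latency-optimized decode pool (the roles the
# article assigns to Vera Rubin and the Groq LPU, respectively).
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt_tokens: int
    kv_cache: list = field(default_factory=list)
    output: list = field(default_factory=list)

class PrefillPool:
    """Processes the entire prompt in one batched pass, filling the KV cache."""
    def run(self, req: Request) -> Request:
        req.kv_cache = [f"kv{i}" for i in range(req.prompt_tokens)]
        return req

class DecodePool:
    """Generates output tokens one step at a time against the handed-off KV cache."""
    def run(self, req: Request, max_new_tokens: int) -> Request:
        for i in range(max_new_tokens):
            req.output.append(f"tok{i}")                   # one token per step
            req.kv_cache.append(f"kv{len(req.kv_cache)}")  # cache grows per step
        return req

def serve(prompt_tokens: int, max_new_tokens: int) -> Request:
    req = Request(prompt_tokens)
    req = PrefillPool().run(req)                  # stage 1: prefill
    return DecodePool().run(req, max_new_tokens)  # stage 2: decode
```

The design point is that the two stages stress hardware differently (compute-bound batch work versus memory-bound sequential steps), so routing them to separate, differently optimized pools can improve both throughput and latency at once.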
Goldman Sachs' more bullish stance stems **mainly from the fact that this GTC addressed the two issues investors care most about: first, whether demand has peaked, and second, whether NVIDIA will be diluted in the inference era by CPUs, self-developed ASICs, or other custom chips.** Goldman Sachs indicated that the $1 trillion forward guidance far exceeds market expectations, confirming that demand from cloud hyperscalers remains strong and durable. Based on an optimistic assessment of potential catalysts in the coming months, Goldman Sachs reiterated its "Buy" rating on NVIDIA and maintained a 12-month target price of $250, emphasizing that the capital expenditure plans of hyperscale cloud providers and the new models built on the Blackwell and Rubin architectures will continue to solidify the company's performance leadership.

### Related Stocks

- [NVDA.US](https://longbridge.com/en/quote/NVDA.US.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [NVDY.US](https://longbridge.com/en/quote/NVDY.US.md)
- [NVDL.US](https://longbridge.com/en/quote/NVDL.US.md)
- [NVDU.US](https://longbridge.com/en/quote/NVDU.US.md)
- [NVDX.US](https://longbridge.com/en/quote/NVDX.US.md)
- [SOXL.US](https://longbridge.com/en/quote/SOXL.US.md)
- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [07788.HK](https://longbridge.com/en/quote/07788.HK.md)
- [07388.HK](https://longbridge.com/en/quote/07388.HK.md)
- [NVDD.US](https://longbridge.com/en/quote/NVDD.US.md)
- [NVDQ.US](https://longbridge.com/en/quote/NVDQ.US.md)

## Related News & Research

- [The AI Stock Wall Street Can't Stop Talking About in 2026](https://longbridge.com/en/news/282407351.md)
- [Wall Street Financial Group Inc. Acquires 2,961 Shares of NVIDIA Corporation $NVDA](https://longbridge.com/en/news/282537372.md)
- [ASML investors bet on 'picks and shovels' of AI revolution](https://longbridge.com/en/news/282682545.md)
- [Korean AI chip startup DEEPX, Hyundai work on robots powered by generative AI](https://longbridge.com/en/news/282774224.md)
- [NVIDIA Corporation $NVDA is Fifth Third Wealth Advisors LLC's 3rd Largest Position](https://longbridge.com/en/news/282408776.md)