---
title: "Perfect Loop: OpenAI buys cloud → Oracle buys cards → NVIDIA reinvests in OpenAI"
description: "OpenAI has signed a letter of intent with NVIDIA to jointly build 10 GW of data centers, with NVIDIA planning to gradually invest up to $100 billion in OpenAI. The collaboration will optimize OpenAI's models against NVIDIA's hardware roadmap."
type: "news"
locale: "en"
url: "https://longbridge.com/en/news/258423674.md"
published_at: "2025-09-23T00:00:41.000Z"
---

# Perfect Loop: OpenAI buys cloud → Oracle buys cards → NVIDIA reinvests in OpenAI

> OpenAI has signed a letter of intent with NVIDIA to jointly build 10 GW of data centers, with NVIDIA planning to gradually invest up to $100 billion in OpenAI. The collaboration will optimize OpenAI's models against NVIDIA's hardware roadmap and complement existing partnerships with Microsoft, Oracle, and others. The initial goal is to bring 1 GW online in the second half of 2026.

Recently NVIDIA, whose stock has been in a slump, made a significant move: it signed a letter of intent with OpenAI to jointly build 10 GW of data centers, in effect yet another long-term option-style contract. The existing data centers are power-constrained and still effectively sit in an "option" state, and now another option is being layered on top.

The signing is, frankly, a bit puzzling. Broadcom's stock surged on its earnings report because it had secured a $10 billion ASIC order from OpenAI, which was said to want to reduce its dependence on NVIDIA. Shortly afterward, OpenAI signed with NVIDIA and deepened the relationship?

## Fact Check | What has been officially disclosed?

**This is a letter of intent (LOI), not a final agreement.** Both parties announced plans to deploy **at least 10 GW of NVIDIA systems** (on the order of "millions of GPUs"), with an initial target of **1 GW** in **the second half of 2026** on the **NVIDIA "Vera Rubin"** platform. At the same time, **NVIDIA plans to invest up to $100 billion in OpenAI**, released progressively as each GW is deployed. The two companies will jointly optimize OpenAI's models and infrastructure software against NVIDIA's hardware and software roadmap, and state that this cooperation **complements** existing collaborations with **Microsoft, Oracle, SoftBank, and the "Stargate" network**. NVIDIA had previously put the Rubin platform on a **2026** timeline; it recently disclosed that the Rubin GPU / Vera CPU **has completed tape-out**, and that if validation goes smoothly, **mass production will begin in 2026**.

**Key points: scale (≥10 GW) × timing (first phase in 2026H2) × capital (up to $100B, phased by GW) × binding (co-optimization of roadmaps).**

How significant is this "10 GW"?

- A Rubin-generation NVL144 cabinet draws roughly ~190 kW, while the disaggregated NVL144-CPX draws about ~370 kW; at 10 GW of IT load, that is roughly 53,000 cabinets (NVL144) or 27,000 cabinets (CPX).
- Pricing reference (previous-generation NVL72): a GB200 NVL72 cabinet with 72 GPUs, "fully stacked," runs roughly ~$3.9M (the bare cabinet ~$3.1M), giving a nominal anchor of about ~$54k per GPU (fully stacked). A quick sanity check of these figures is sketched below.
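A minimal back-of-envelope sketch of this cabinet-count and per-GPU arithmetic, using only the reference figures cited above (the ~190 kW and ~370 kW cabinet draws and the ~$3.9M NVL72 price are cited estimates, not official list prices; the script and its names are purely illustrative):

```python
# Back-of-envelope: how many cabinets does 10 GW of IT load imply,
# and what per-GPU "price anchor" does an NVL72 cabinet suggest?
# Figures are the reference points cited in the text, not official pricing.

IT_LOAD_W = 10e9                  # 10 GW of IT load

CABINET_POWER_W = {
    "NVL144":     190e3,          # ~190 kW per Rubin NVL144 cabinet
    "NVL144-CPX": 370e3,          # ~370 kW per disaggregated NVL144-CPX cabinet
}

for name, watts in CABINET_POWER_W.items():
    cabinets = IT_LOAD_W / watts
    print(f"{name}: ~{cabinets:,.0f} cabinets")   # ≈ 53,000 and ≈ 27,000

# NVL72 "full stack" price anchor
NVL72_FULL_STACK_USD = 3.9e6      # ~$3.9M per fully stacked GB200 NVL72 cabinet
GPUS_PER_NVL72 = 72
print(f"Anchor: ~${NVL72_FULL_STACK_USD / GPUS_PER_NVL72:,.0f} per GPU")  # ≈ $54,000
```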
If we take Jensen Huang's cited metric of "10 GW ≈ 4–5 million GPUs" as a rough scale estimate (using only the magnitude, not an exact quote):

- IT equipment CAPEX (using the NVL72 "$54k per GPU, full stack" figure as a proxy): ≈ $217B (4M GPUs) to $271B (5M GPUs);
- Civil construction/campus CAPEX (C&W benchmark: $9.7–15.0M per MW): 10 GW ≈ $97–150B;
- Total CAPEX range: ≈ $314–421B (depending on GPU count and site construction costs).

**Energy OPEX (annual)** (illustrative assumptions: PUE 1.15–1.25, average IT utilization 0.75–0.95, electricity price $40–$100/MWh):

- Annual electricity consumption ≈ 75–104 TWh;
- Annual electricity cost ≈ $3.4–$10.4B.

This is consistent with the IEA's observation that data-center electricity consumption is on track to roughly double, and it is why power and PPAs are the real trigger behind all of these contracts (the sketch below reproduces this arithmetic).
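To make the arithmetic above easy to check, here is a minimal sketch under the same assumptions. The GPU-count range, the $54k-per-GPU anchor, the C&W $/MW benchmark, and the PUE/utilization/electricity-price ranges are the illustrative figures cited in the text, not disclosed deal terms.

```python
# Rough CAPEX and annual electricity OPEX for a 10 GW build-out,
# using the ranges cited in the text (illustrative, not disclosed deal terms).

GW = 10
GPU_COUNT_RANGE = (4e6, 5e6)          # "10 GW ≈ 4–5 million GPUs"
USD_PER_GPU = 3.9e6 / 72              # ≈ $54k "full stack" anchor from NVL72
CIVIL_USD_PER_MW = (9.7e6, 15.0e6)    # C&W construction benchmark, $/MW

it_capex = tuple(n * USD_PER_GPU for n in GPU_COUNT_RANGE)      # ≈ $217B–$271B
civil_capex = tuple(c * GW * 1000 for c in CIVIL_USD_PER_MW)    # ≈ $97B–$150B
total_capex = (it_capex[0] + civil_capex[0], it_capex[1] + civil_capex[1])
print(f"Total CAPEX: ${total_capex[0]/1e9:,.0f}B to ${total_capex[1]/1e9:,.0f}B")  # ≈ $314B–$421B

# Annual electricity: IT load x utilization x PUE x hours, then x $/MWh
HOURS_PER_YEAR = 8760
PUE = (1.15, 1.25)
UTILIZATION = (0.75, 0.95)
PRICE_USD_PER_MWH = (40, 100)

energy_twh = tuple(GW * u * p * HOURS_PER_YEAR / 1000            # GW*h -> TWh
                   for u, p in zip(UTILIZATION, PUE))
cost_usd = tuple(e * 1e6 * price                                  # TWh -> MWh
                 for e, price in zip(energy_twh, PRICE_USD_PER_MWH))
print(f"Electricity: {energy_twh[0]:.0f}-{energy_twh[1]:.0f} TWh/yr, "
      f"${cost_usd[0]/1e9:.1f}B-${cost_usd[1]/1e9:.1f}B/yr")      # ≈ 76–104 TWh, ≈ $3–10B
```

Depending on which ends of the utilization, PUE, and price ranges are paired, the annual electricity bill lands roughly in the $3B to $10B band, in line with the figures above.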
Does this collaboration overlap with the existing OpenAI–Oracle partnership? (Why does OpenAI still need a separate agreement with NVIDIA?)

OpenAI buys cloud → Oracle buys cards → NVIDIA reinvests in OpenAI: a perfect closed loop. This is not "either/or," nor is it "the same good news announced twice"; it is **a two-layer, complementary collaboration**:

- **Oracle = the data center/cloud delivery layer**, responsible for "**powered data centers + operations**." On July 22, OpenAI and Oracle announced an additional **4.5 GW** of Stargate data center capacity in the U.S.; subsequent media reports pointed to an **approximately $300 billion** compute procurement contract over **about 5 years** (starting in 2027), essentially a **pay-as-you-go/bulk** cloud compute contract.
- **NVIDIA = the compute hardware/platform layer**, responsible for "**GPUs/complete systems + software stack + capacity assurance**." On September 22, OpenAI signed an **LOI** with NVIDIA for **≥10 GW**, under which NVIDIA **will progressively invest up to $100 billion in OpenAI as each GW is deployed**; the first **1 GW** phase is planned to go live on the **Vera Rubin** platform in the **second half of 2026**.

The two lines do not conflict; the official statement explicitly noted that this NVIDIA–OpenAI collaboration is a **"complement"** to the existing network of **Microsoft, Oracle, SoftBank, and Stargate**. In other words, **NVIDIA supplies and helps finance the "system-level hardware" and co-optimizes the roadmap**, while these cabinets are likely to be **deployed in Oracle's (or other partners') data centers**, with Oracle responsible for power, site selection, installation, operations, and external delivery. **The same batch of capacity is disclosed by the supplier and the cloud provider from their respective "layers": it looks like "one fish eaten several ways," but in reality it is upstream and downstream locking, layer by layer.**

Why does OpenAI still need a separate binding at the NVIDIA level? Three practical drivers:

- Capacity and priority: the Rubin generation (including the NVL144 / CPX long-context inference variants) depends heavily on HBM and advanced packaging; a direct line to the supplier better secures allocation and timelines.
- Technical co-design: OpenAI's models and infrastructure software are optimized jointly with NVIDIA's hardware and software roadmap (NVLink, networking, memory bandwidth, disaggregated inference, etc.), reducing waste from "mismatched inventory."
- Capital structure: NVIDIA's "incremental investment by GW" effectively puts supplier capital alongside the buyer's, converting "one large up-front hardware CAPEX" into "milestone-based funding loops" that complement Oracle's cloud OPEX contracts.

Thus Oracle handles "cloud with power and space," NVIDIA handles "systems with chips and cabinets," and OpenAI binds both ends, securing "power and location" as well as "hardware and roadmap." This is not "eating the same fish twice," but rather "upstream and downstream double insurance" along the same industrial chain, aimed at reducing the supply and schedule risks of large-scale expansion from 2026 to 2028.

OpenAI's "options puzzle" is gradually taking shape: OpenAI is signing a "conditional option" on each of the three constraint lines, and the pieces do not negate each other but rather complete the chain:

- Compute chips/systems → NVIDIA: signed an LOI for ≥10 GW, aiming to launch the first phase on the Vera Rubin platform in H2 2026; it specifies that NVIDIA will invest "up to $100B" in phases as each GW lands, essentially bundling "supplier capacity + roadmap + funding." Note that this is a letter of intent, with many terms pending a formal agreement.
- Location/power/cloud delivery → Oracle: a 4.5 GW Stargate capacity expansion announced in July, plus a ~$300B cloud contract framework (OPEX form). This line addresses "powered data centers" and on-demand delivery.
- Cost/risk hedging → Broadcom (custom ASIC): in September, multiple media outlets reported on an OpenAI × Broadcom plan for custom chips entering production in 2026 (with reported scale starting from $10B); it targets inference cost and supply security and does not preclude continued use of NVIDIA for frontier training.

In other words: **NVIDIA covers "performance and stack + supply priority,"** **Oracle covers "power and data centers,"** and **Broadcom covers "long-term unit cost and dual sourcing."** All of them indeed come with "conditions," which is why they look like option contracts: either binding milestones (power, go-live, mass production) or binding capacity and funding frameworks that are "convertible, retractable, or triggered on demand." CBRE and C&W data also confirm this: large orders are generally **pre-leased/booked years in advance**, while **power access is the hard constraint**.

## "Broadcom breaks free from dependence" vs. "deepens binding with NVIDIA": how can both be true at the same time?

1. Different time horizons: NVIDIA Rubin (2026H2) has a well-defined platform, network, and software stack with large-scale deliverability; Broadcom's custom ASIC (2026) still has to go through tape-out → validation → capacity ramp, making it more of a medium- to long-term play on the inference cost curve. The two paths run in parallel and do not conflict.
2. Workload segmentation: frontier training and multi-modal long context are tightly coupled to NVLink, networking, HBM bandwidth, and the software ecosystem, where NVIDIA still holds the advantage; large-scale inference and specific model shapes can lower TCO through ASICs. Rubin-CPX itself is a cabinet-level productization of "disaggregated inference (splitting prefill and decode)."
3. Supply chain game: signing with three parties at once locks in the three big constraints: silicon/packaging/HBM (NVIDIA's roadmap and upstream allocation), power/site (Oracle and PPAs), and cost/dual sourcing (Broadcom/TSMC). It is a textbook case of "one fish eaten several ways," but in essence it is a multi-link stack of options meant to reduce the uncertainty of landing 2026–2028 capacity.

It seems that everyone is signing "option contracts with trigger conditions."
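To make that "option with trigger conditions" framing concrete, here is a toy sketch of a milestone-gated commitment. The per-GW tranche schedule has not been disclosed; the even $10B-per-GW split and the `funded_after` helper below are purely illustrative assumptions.

```python
# Illustrative model of a milestone-gated funding commitment:
# capital is released only as each GW of capacity actually comes online.
# The even $10B-per-GW split is an assumption, not a disclosed deal term.

TOTAL_COMMITMENT_USD = 100e9                     # "up to $100B"
TOTAL_GW = 10
TRANCHE_USD = TOTAL_COMMITMENT_USD / TOTAL_GW    # assumed even split: $10B per GW

def funded_after(gw_milestones_met: int) -> float:
    """Capital released once a whole number of GW milestones has been met."""
    gw = max(0, min(gw_milestones_met, TOTAL_GW))
    return gw * TRANCHE_USD

# Example: only the first 1 GW phase (planned for 2026H2) has landed.
print(f"1 GW live  -> ${funded_after(1)/1e9:.0f}B released")
print(f"10 GW live -> ${funded_after(10)/1e9:.0f}B released (the full 'up to' ceiling)")
```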
OpenAI is using the NVIDIA (training/long-context inference) × Oracle (power/delivery) × Broadcom (long-term TCO/dual sourcing) trio to push against the three hard thresholds of power, packaging/HBM, and closing the funding loop; the "10 GW" is an additional track on top of existing commitments, with a scale and timeline pointing to a heavy reinvestment window from 2026 to 2028. Still, the market's habit of repeatedly trading on these "options" calls for extra caution!

**Risk Warning and Disclaimer**: Markets carry risk and investment requires caution. This article does not constitute personal investment advice and does not take into account individual users' specific investment goals, financial situations, or needs. Users should consider whether any opinions, views, or conclusions in this article fit their specific circumstances. Any investment made on this basis is at your own risk.

### Related Stocks

- [OpenAI.NA - OpenAI](https://longbridge.com/en/quote/OpenAI.NA.md)
- [NVDA.US - NVIDIA](https://longbridge.com/en/quote/NVDA.US.md)
- [ORCL.US - Oracle](https://longbridge.com/en/quote/ORCL.US.md)
- [NVDL.US - GraniteShares 2x Long NVDA Daily ETF](https://longbridge.com/en/quote/NVDL.US.md)
- [07788.HK - XL2CSOPNVDA](https://longbridge.com/en/quote/07788.HK.md)
- [07388.HK - XI2CSOPNVDA](https://longbridge.com/en/quote/07388.HK.md)
- [NVDY.US - YieldMax NVDA Option Income Strategy ETF](https://longbridge.com/en/quote/NVDY.US.md)
- [NVDD.US - Direxion Daily NVDA Bear 1X ETF](https://longbridge.com/en/quote/NVDD.US.md)
- [NVDX.US - T-Rex 2X Long NVIDIA Daily Target ETF](https://longbridge.com/en/quote/NVDX.US.md)
- [NVDQ.US - T-Rex 2X Inverse NVIDIA Daily Target ETF](https://longbridge.com/en/quote/NVDQ.US.md)