
Perfect Loop: OpenAI buys cloud → Oracle buys cards → NVIDIA reinvests in OpenAI

OpenAI has signed a letter of intent with NVIDIA to jointly build 10 GW of data center capacity, with NVIDIA planning to progressively invest up to $100 billion in OpenAI. The collaboration will co-optimize OpenAI's models with NVIDIA's hardware roadmap and complements existing partnerships with Microsoft, Oracle, and others. The initial goal is to bring 1 GW online in the second half of 2026.
NVIDIA, whose stock has recently been under pressure, made a significant move by signing a letter of intent with OpenAI to jointly build 10 GW of data center capacity, in effect yet another long-term options contract. The existing data center commitments are power-constrained and still effectively options themselves, and now one more option is being stacked on top.
This signing is indeed a bit puzzling. Broadcom's stock surged on its earnings report after it reportedly secured a roughly $10 billion ASIC order from OpenAI, framed as OpenAI reducing its dependence on NVIDIA. Barely a week later, OpenAI signed a letter of intent with NVIDIA that deepens those very ties. How do the two square?

Fact Check | What has been officially disclosed?
This is a "Letter of Intent (LOI)," not a final agreement. Both parties announced plans to deploy at least 10 GW of NVIDIA systems (on the order of "millions of GPUs"), with an initial target of 1 GW set for the second half of 2026, using the NVIDIA "Vera Rubin" platform. At the same time, NVIDIA plans to invest up to $100 billion in OpenAI based on the progress of each GW deployment. Both parties will collaboratively optimize OpenAI's model/infrastructure software with NVIDIA's hardware and software roadmap, stating that this cooperation complements their existing collaborations with Microsoft, Oracle, SoftBank, and the "Stargate" network. NVIDIA had previously disclosed the 2026 timeline for the Rubin platform; it has recently revealed that the Rubin GPU / Vera CPU has completed tape-out, and if verification goes smoothly, mass production will begin in 2026.
Key points: Scale (≥10 GW) × Timing (first phase 2026H2) × Capital (up to $100B, phased by GW) × Binding (co-optimization of roadmaps).
How significant is this "10 GW"?
- The Rubin-generation NVL144 rack draws roughly ~190 kW per rack, while the disaggregated NVL144-CPX is about ~370 kW per rack; at 10 GW of IT load, that works out to roughly 53,000 racks (NVL144) or 27,000 racks (CPX).
- Pricing reference (from the previously disclosed NVL72 generation): a "fully stacked" GB200 NVL72 rack with 72 GPUs runs approximately ~$3.9M (the rack alone ~$3.1M). That yields a nominal anchor of about ~$54k per GPU (fully stacked).
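A minimal sketch of the arithmetic behind these two bullets, using only the figures stated above (the rack power draws and the NVL72 rack price are the article's approximations, not vendor quotes):

```python
# Back-of-the-envelope check of the rack counts and per-GPU price anchor above.
IT_LOAD_MW = 10_000                 # 10 GW of IT load
NVL144_KW_PER_RACK = 190            # ~190 kW per Rubin NVL144 rack (approx.)
CPX_KW_PER_RACK = 370               # ~370 kW per NVL144-CPX rack (approx.)

racks_nvl144 = IT_LOAD_MW * 1_000 / NVL144_KW_PER_RACK   # ≈ 53,000 racks
racks_cpx = IT_LOAD_MW * 1_000 / CPX_KW_PER_RACK         # ≈ 27,000 racks

NVL72_FULL_RACK_USD = 3.9e6         # "fully stacked" GB200 NVL72 rack estimate
GPUS_PER_NVL72_RACK = 72
usd_per_gpu = NVL72_FULL_RACK_USD / GPUS_PER_NVL72_RACK  # ≈ $54k per GPU

print(f"NVL144 racks: ~{racks_nvl144:,.0f}, CPX racks: ~{racks_cpx:,.0f}")
print(f"Per-GPU anchor: ~${usd_per_gpu/1e3:.0f}k")
```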
If we take Jensen Huang's cited figure of "10 GW ≈ 4–5 million GPUs" as a rough scale (a ballpark, not a precise quote):
- IT equipment CAPEX (using NVL72's "$54k/GPU full stack" as a proxy): ≈ $217B (4M GPUs) — $271B (5M GPUs);
- Civil construction/campus CAPEX (C&W: $9.7–15.0M per MW): 10 GW ≈ $97–150B;
- Total CAPEX range: $314–421B (depending on the number of GPUs and construction site costs).
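A quick sketch reproducing the CAPEX range above, under the same assumptions (the ~$54k/GPU anchor from the NVL72 rack price, the 4–5 million GPU ballpark, and C&W's per-MW construction cost range):

```python
# Rough CAPEX range: 4-5M GPUs at the ~$54k/GPU "full stack" anchor,
# plus C&W's $9.7-15.0M per MW for civil/campus construction.
USD_PER_GPU = 3.9e6 / 72                    # ≈ $54k anchor from the NVL72 rack price
GPU_COUNT_RANGE = (4_000_000, 5_000_000)    # Jensen Huang's "4-5 million GPUs" ballpark
SITE_USD_PER_MW = (9.7e6, 15.0e6)           # C&W civil/campus cost per MW
IT_MW = 10_000                              # 10 GW

it_capex = [n * USD_PER_GPU for n in GPU_COUNT_RANGE]                 # ≈ $217B-$271B
site_capex = [c * IT_MW for c in SITE_USD_PER_MW]                     # ≈ $97B-$150B
total = (it_capex[0] + site_capex[0], it_capex[1] + site_capex[1])    # ≈ $314B-$421B

print(f"IT CAPEX:    ${it_capex[0]/1e9:.0f}B - ${it_capex[1]/1e9:.0f}B")
print(f"Site CAPEX:  ${site_capex[0]/1e9:.0f}B - ${site_capex[1]/1e9:.0f}B")
print(f"Total CAPEX: ${total[0]/1e9:.0f}B - ${total[1]/1e9:.0f}B")
```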
Energy OPEX (annual) (assumptions: PUE 1.15–1.25, average IT utilization 0.75–0.95, electricity at $40–$100/MWh):
- Annual electricity consumption ≈ 75–104 TWh;
- Annual electricity cost ≈ $3.4–$10.4B. This fits the IEA's picture of data center electricity consumption doubling rapidly, and it is why electricity/PPAs are the "true trigger" behind all of these contracts.
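The same energy estimate in code, using the stated ranges (PUE, utilization, and price are the article's example assumptions; the exact pairing of low/high ends is my choice):

```python
# Annual energy OPEX under the stated ranges: 10 GW IT load, PUE 1.15-1.25,
# average IT utilization 0.75-0.95, electricity at $40-$100/MWh.
IT_GW = 10
HOURS_PER_YEAR = 8_760

low_twh = IT_GW * 0.75 * 1.15 * HOURS_PER_YEAR / 1_000    # ≈ 75.6 TWh
high_twh = IT_GW * 0.95 * 1.25 * HOURS_PER_YEAR / 1_000   # ≈ 104.0 TWh

# 1 TWh = 1e6 MWh; cheapest case pairs low consumption with $40/MWh,
# most expensive pairs high consumption with $100/MWh. This gives ≈ $3.0B at the
# low end (the article cites ~$3.4B, presumably under slightly different assumptions).
low_cost = low_twh * 1e6 * 40 / 1e9      # ≈ $3.0B
high_cost = high_twh * 1e6 * 100 / 1e9   # ≈ $10.4B

print(f"Annual consumption: {low_twh:.1f}-{high_twh:.1f} TWh")
print(f"Annual electricity cost: ${low_cost:.1f}B - ${high_cost:.1f}B")
```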
Does this collaboration overlap with OpenAI's deal with Oracle? (Why does OpenAI still need to sign with NVIDIA?)
OpenAI buys cloud → Oracle buys cards → NVIDIA reinvests in OpenAI, a perfect closed loop.
This is not necessarily "either/or," nor is it the same good news counted twice; it is a two-layer, complementary collaboration:
Oracle = Data center/cloud delivery layer: responsible for "powered data centers + operations." On July 22, OpenAI and Oracle announced the addition of 4.5 GW of Stargate data center capacity in the U.S.; subsequent media reports indicated an approximately $300 billion computing power procurement contract over about 5 years (starting in 2027), essentially a pay-as-you-go/bulk cloud computing power contract.
NVIDIA = Computing power hardware/platform layer: responsible for "GPUs/complete systems + software stack + capacity assurance." On September 22, OpenAI signed an LOI with NVIDIA for ≥10 GW, specifying that NVIDIA will progressively invest up to $100 billion in OpenAI as each GW is deployed; the first 1 GW phase is planned to go live on the Vera Rubin platform in the second half of 2026.
The two lines do not conflict; the official statement specifically noted that this NVIDIA-OpenAI collaboration is a "complement" to the cooperation network of Microsoft, Oracle, SoftBank, and Stargate. In other words, NVIDIA supplies and finances the "system-level hardware" and co-optimizes the roadmap, while these racks are likely to be deployed in Oracle's (or other partners') data centers, with Oracle responsible for power, site selection, installation, operation, and external delivery. The same batch of capacity is disclosed by the supplier and the cloud provider from their respective "layers," which looks like "one fish, multiple meals," but in reality it is upstream-and-downstream layered lock-in.
Why is there still a need to bind separately at the NVIDIA level? Three practical drivers:
- Capacity and priority: The Rubin generation (including the NVL144 / CPX long-context inference form) depends heavily on HBM and advanced packaging, and a direct relationship with the supplier better secures allocation and timelines.
- Technical co-design: OpenAI's models and infrastructure software are co-optimized with NVIDIA's hardware and software roadmap (NVLink, networking, memory bandwidth, inference disaggregation, etc.), reducing waste from "mismatched inventory."
- Capital structure: NVIDIA's "incremental investment per GW" essentially brings supplier capital into the build, helping convert a large one-time hardware-purchase CAPEX into milestone-based funding loops, complementing Oracle's cloud OPEX contracts.
Thus, Oracle is responsible for "cloud with power and space," NVIDIA is responsible for "systems with chips and racks," and OpenAI binds both ends, locking in both "power and location" and "hardware and roadmap." This is not eating the same fish twice, but "upstream-and-downstream double insurance" along the same supply chain, aimed at reducing the supply and schedule risks of the 2026–2028 scale-up.
OpenAI's "options puzzle" is gradually taking shape: OpenAI is signing a "conditional option" on each of the three constraint lines, which do not negate each other but rather complement the chain—
- Computing power chips/systems → NVIDIA: Signed a LOI for ≥10 GW, aiming for the first phase to launch the Vera Rubin platform in H2 2026; it specifies that NVIDIA will invest "up to $100B" in phases based on each GW's implementation, essentially integrating "supplier capacity + roadmap + funding." Note that this is a letter of intent (LOI), with many terms pending formal agreement.
- Location/power/cloud delivery → Oracle: Announced a 4.5 GW capacity expansion for Stargate in July; $300B cloud contract framework (OPEX form). This line addresses "data centers with power" and on-demand delivery.
- Cost/risk hedging → Broadcom (self-developed ASIC): In September, multiple media outlets reported on the OpenAI × Broadcom plan for custom chips to be produced in 2026 (with scale reports starting from $10B); it is more focused on inference cost/supply security and does not exclude continued use of NVIDIA for cutting-edge training.
In other words: NVIDIA addresses "performance, software stack, and supply priority," Oracle addresses "power and data centers," and Broadcom addresses "long-term unit cost and dual sourcing." All of them indeed come with conditions, which is why they read like option contracts: either tied to milestones (power, go-live, mass production) or to capacity and funding frameworks that are convertible, retractable, or triggered on demand. CBRE and C&W data also confirm this: large deployments are generally pre-leased/booked years in advance, and power access is the hard constraint.
"Broadcom breaks free from dependence" vs "Deepens binding with NVIDIA"? How can both be true at the same time?
- Different time horizons: NVIDIA's Rubin (2026H2) comes with a well-defined platform, network, and software stack and large-scale deliverability; Broadcom's custom chip (2026) still has to go through tape-out → validation → capacity ramp, making it more of a medium-to-long-term play on the inference cost curve. The two tracks run in parallel and do not conflict.
- Workload segmentation: Cutting-edge training and multimodal long-context work are tightly coupled to NVLink, networking, GPU memory bandwidth, and the software ecosystem, where NVIDIA still holds the advantage; large-scale inference and specific model shapes can lower TCO via ASICs. Rubin CPX is itself a rack-level productization of disaggregated inference (splitting prefill and decode).
- Supply chain game: Signing with all three parties at once locks down the three big constraints: silicon/packaging/HBM (NVIDIA's roadmap and upstream allocation), power/sites (Oracle and PPAs), and cost/dual sourcing (Broadcom/TSMC). It looks like "one fish, multiple meals," but it is essentially a stack of options across multiple links, designed to reduce execution uncertainty from 2026 to 2028. Everyone appears to be signing option contracts with trigger conditions. OpenAI uses the trio of NVIDIA (training/long-context inference) × Oracle (power/delivery) × Broadcom (long-term TCO/dual sourcing) to push against the three hard thresholds of power, packaging/HBM, and funding closure; the "10 GW" is an additional track on top of existing commitments, with a scale and cycle pointing to a heavy reinvestment window from 2026 to 2028.
That said, the market's repeated hype around these "options" calls for extra caution!
Risk Warning and Disclaimer
The market has risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account individual users' specific investment goals, financial situations, or needs. Users should consider whether any opinions, views, or conclusions in this article align with their specific circumstances. Investing on this basis is at one's own risk.

