
Will the 250GW data center be completed by 2033? Is OpenAI's "ambition" realistic?

A typical nuclear power plant generates about 1 gigawatt of electricity, so OpenAI's goal would require the equivalent of the output of 250 nuclear power plants just to power its own artificial intelligence. At the current cost of building a 1-gigawatt nuclear plant (approximately $50 billion), 250 such plants would cost $12.5 trillion.
OpenAI plans to build 250GW of data center capacity by 2033. CEO Sam Altman sees this as a path of "brutal industrialization" toward artificial general intelligence, but the plan still faces enormous challenges, including power supply, trillion-dollar funding needs, and supply chain bottlenecks.
Last week, the Wall Street Journal reported that the flagship data center site in Abilene, Texas has officially begun operating as part of OpenAI and Oracle's $500 billion Stargate project. OpenAI CEO Sam Altman showed the media the preliminary results of this ambitious project.
(OpenAI CEO Sam Altman at the Stargate data center in Abilene, Texas)
On the 800-acre construction site, 6,400 workers are hard at work, and enough fiber-optic cable has been laid to circle the Earth 16 times. Altman said:
This massive construction site is just a small part of the future scale, and it is not even enough to meet the demands of ChatGPT.
According to information disclosed internally by OpenAI, the company expects its data center capacity to exceed 2GW by the end of 2025 and plans to reach an astonishing 250GW by 2033, roughly one-fifth of the current total installed generating capacity in the United States (approximately 1,200GW).
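As a quick back-of-the-envelope check of that share, using only the approximate figures cited above (illustrative, not official OpenAI or grid data):

```python
# Share of US installed generating capacity implied by the article's figures (approximate).
openai_target_gw = 250            # planned data center capacity by 2033
us_installed_capacity_gw = 1200   # approximate current US total cited above

share = openai_target_gw / us_installed_capacity_gw
print(f"{share:.0%} of current US installed capacity")  # ~21%, roughly one-fifth
```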
The core of Altman's strategy is "scaling compute." This refers not to algorithmic breakthroughs, but to driving artificial intelligence toward artificial general intelligence (AGI) and artificial superintelligence (ASI) through "brutal industrialization": millions of chips, sprawling data center campuses, gigawatts of power, and vast amounts of cooling water.
Under this logic, the criteria for measuring AI capability have fundamentally changed. Altman explained:
At such a scale, the number of GPUs is meaningless; instead, the power consumed by the entire chip cluster—measured in gigawatts (GW)—has become the only standard for measuring how much effective computing power a company can maintain.
However, the realization of this plan faces significant challenges, including power supply, funding needs, and supply chain bottlenecks. Industry insiders question whether such a massive infrastructure investment is realistic and whether it is worth paying such a high price for the development of artificial intelligence.
Power Demand Equivalent to 250 Nuclear Power Plants
OpenAI's 250GW target implies a tremendous demand on the power system.
A typical nuclear power plant generates about 1GW, meaning that supporting OpenAI's AI development alone would require newly built generating capacity equivalent to 250 nuclear power plants.
By comparison, Microsoft's Azure, the second-largest cloud business, had a total operational power consumption of only about 5GW across all of its customers as of the end of 2023. According to reports, large data centers used to draw 10 to 50 megawatts, but developers are now planning individual campuses of several thousand megawatts, comparable to the energy consumption of an entire city.
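For a sense of scale, here is a rough comparison using only the figures quoted in this article; the inputs are approximations, not measured values:

```python
# Rough scale comparison based on the numbers cited above (illustrative only).
openai_target_gw = 250   # OpenAI's 2033 goal
azure_total_gw = 5       # reported Azure operational consumption, end of 2023
legacy_dc_mw = 50        # upper end of a traditional large data center

print(f"{openai_target_gw / azure_total_gw:.0f}x all of Azure")                      # ~50x
print(f"{openai_target_gw * 1000 / legacy_dc_mw:,.0f}x a large legacy data center")  # ~5,000x
```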
However, the "compute" Altman refers to goes far beyond electricity. The figure stands in for an entire industrial system: data centers, chips, cooling and water systems, network fiber, and the high-speed interconnects that link millions of processors into supercomputers.
The report cites people familiar with the matter as saying that OpenAI's rapidly growing server demand has surprised even executives at its key supplier, NVIDIA.
To address the electricity challenge, OpenAI and its partners are taking unconventional approaches, including building their own power plants rather than waiting for utilities to supply grid power, and siting facilities in remote areas where energy is more readily available.
The reason is that utilities are inherently conservative about adding new generation capacity; they are unlikely to risk building power plants, and potentially ending up with overcapacity, on the strength of a single company's demand.
According to reports, OpenAI is planning a mixed energy solution using natural gas, wind, and solar power in Texas, but this still cannot easily fill the massive gap of hundreds of gigawatts.
Trillion-Dollar Investment and Supply Chain Bottlenecks
In addition to electricity, funding and supply chains are the other two major constraints.
Altman admitted in an internal letter that OpenAI "has already invested hundreds of billions of dollars, and it will take trillions of dollars to do this well." He also stated that it is necessary to "activate the entire global industrial base—energy, manufacturing, logistics, labor, and supply chains."
OpenAI has already committed heavily on this front. Reports indicate that even before the 250GW target was announced, the company had signed contracts to secure about 8GW of computing power by 2028, which by itself requires paying cloud providers such as Microsoft hundreds of billions of dollars.
Moreover, if building a 1GW nuclear power plant is estimated at roughly $50 billion, the investment in power generation alone could reach $12.5 trillion.
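The arithmetic behind that figure follows directly from the article's own cost assumption of about $50 billion per gigawatt; this is a rough estimate, not an actual budget:

```python
# Cost implied by the assumption of ~$50 billion per 1 GW of nuclear-equivalent
# generating capacity (the article's assumption, not an industry quote).
cost_per_gw_usd = 50e9   # assumed build cost of 1 GW
target_gw = 250          # OpenAI's 2033 capacity goal

total_cost_usd = cost_per_gw_usd * target_gw
print(f"${total_cost_usd / 1e12:.1f} trillion")  # $12.5 trillion
```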
Supply chain bottlenecks are equally severe. Supporting compute expansion on this scale means chip foundry giant TSMC must provide more capacity to produce NVIDIA's GPUs, and lithography-machine maker ASML must supply more equipment.
Capacity expansion in these areas cannot happen overnight; it requires at-risk investment and coordination across the entire upstream supply chain. Even with NVIDIA committing financial support for OpenAI's data centers, adding new capacity will be a daunting process.
OpenAI's "Ambition" is Actually a Gamble
Ultimately, OpenAI's astonishing plan is a gamble based on belief.
Altman and his competitors firmly believe that ever-larger GPU clusters are the only path to more powerful AI models and the key to unlocking AGI and ASI. Like grand historical projects such as the Hoover Dam and the Apollo program, the bet is underpinned by a steadfast faith in a coming technological transformation.
Analysts believe that how investors and the market view this bet depends on their judgment about the future of AI. If superintelligent AI can solve human problems like cancer, then trillion-dollar investment is warranted; if not, the plan may go down in history as a "massive engineering disaster," akin to the California high-speed rail project.
Regardless of whether the 250GW target can ultimately be achieved, this AI-driven, almost frenzied infrastructure construction boom has already begun. It is reshaping the energy, land, and capital markets in unprecedented ways, while society as a whole seems yet to fully realize the enormous costs and far-reaching impacts behind it.
As Altman himself admits, when people use ChatGPT, few think of the vast, dust-shrouded construction sites behind it and the industrial power they represent.

