---
title: "💢💢💢"
type: "Topics"
locale: "en"
url: "https://longbridge.com/en/topics/39749978.md"
description: "🔥🎯《The New Yorker》's in-depth investigation points directly at the power core of $OpenAI: The Sam Altman controversy is not just a personal issue, but a crack in the governance structure of the AI era. This investigation draws on 100+ interviews, about 70 pages of internal materials from Ilya, and 200+ pages of private notes from Dario; to view it merely as a “personality controversy” would underestimate its significance. What truly matters is that it reveals a problem: when AI companies reach trillion-dollar levels, has their governance structure already failed to keep pace with the expansion of their power? Let's first look at the core source of the conflict..."
datetime: "2026-04-07T15:55:54.000Z"
locales:
  - [en](https://longbridge.com/en/topics/39749978.md)
  - [zh-CN](https://longbridge.com/zh-CN/topics/39749978.md)
  - [zh-HK](https://longbridge.com/zh-HK/topics/39749978.md)
author: "[辰逸](https://longbridge.com/en/profiles/16318663.md)"
---

# 💢💢💢

🔥🎯 The New Yorker's in-depth investigation points directly at the power core of $OpenAI: The Sam Altman controversy is not just a personal issue, but a crack in the governance structure of the AI era.

This investigation draws on 100+ interviews, approximately 70 pages of internal materials from Ilya, and 200+ pages of private notes from Dario. To view it merely as a "personnel controversy" would be to underestimate its significance.

What's truly important is that it reveals a problem:

When an AI company approaches a trillion-dollar valuation, has its governance structure already failed to keep pace with the speed of its power expansion?

First, look at the core source of the conflict.

Ilya Sutskever compiled approximately 70 pages of materials, including Slack records, HR documents, and photos taken with personal devices, and sent them to the board via self-destructing, "burn after reading" messages.

The beginning of this material is a highly pointed summary:

"Sam has a persistent pattern of behavior," with the first item being "Lying."

Meanwhile, Dario Amodei's 200+ pages of private notes recorded over many years reach a similarly direct conclusion:

The core of the problem is not the system, but the person himself.

When two people, drawing on different sources and covering different periods, reach highly consistent judgments, this is no longer an isolated conflict but a long-accumulated structural contradiction.

Next, consider the issue of resource allocation.

OpenAI's superalignment team was promised 20% of computing resources.

But the reality was:

Only 1%–2%, and mainly running on the oldest clusters with the worst-performing chips.

Ultimately, this team was disbanded before completing its task.

This set of data itself illustrates one thing:

The priority between safety and capability has already shifted internally.

Now look at another line in the governance structure.

During his removal, Sam Altman proposed a new board composition directly to Satya Nadella, including specific members and arrangements for the investigation.

Even the new board member responsible for the "independent investigation" was chosen only after consulting with him.

What does this mean?

It means the structure that should be overseeing the CEO is being inversely influenced by the CEO.

The governance relationship is beginning to form a "closed loop."

The role capital played in this process is equally crucial.

Thrive's planned investment, a tender offer valuing OpenAI at $86 billion, was briefly paused, sending a clear signal:

The deal would proceed only if Sam returned.

This directly altered the internal power dynamics—

Employees' financial incentives were tied to the CEO's fate.

When incentive structures and governance structures overlap, the outcome is usually the same:

Decisions are no longer independent.

Looking further back, this pattern is not new.

During the Loopt era, employees twice asked the board to remove Sam;

At Y Combinator, similar complaints surfaced internally, including assessments of long-term opacity and inconsistency.

In other words, this is not a "sudden incident," but a behavioral trajectory spanning many years.

Stringing all these points together leads to a more important conclusion:

The problem is not just Sam Altman.

The problem arises when an AI company simultaneously meets all of these conditions:

Approaching a trillion-dollar valuation expectation  
Possessing key foundational model capabilities  
Participating in government projects (including immigration, surveillance, military applications)

yet its internal governance still relies heavily on "personal power + informal structures."

Under those conditions, the risks are amplified.

This is also why the significance of this event has already transcended the company itself.

It touches upon a more fundamental question:

When AI becomes an infrastructure-level force,

Can enterprises continue to operate in a "founder-driven" manner?

Or must they shift toward stricter governance models, closer to those of state-level institutions?

Which path do you lean toward: will future AI companies continue to revolve around strong founders, or will founder control ultimately give way to institutionalized governance?

### Related Stocks

- [OpenAI.NA](https://longbridge.com/en/quote/OpenAI.NA.md)
- [DXYZ.US](https://longbridge.com/en/quote/DXYZ.US.md)
- [SAM.US](https://longbridge.com/en/quote/SAM.US.md)