--- title: "OpenAI research lead Noam Brown thinks AI ‘reasoning’ models could’ve arrived decades ago" description: "Noam Brown, OpenAI's research lead, stated that AI reasoning models could have emerged 20 years earlier with the right approaches. He emphasized the importance of reasoning in AI, particularly in chal" type: "news" locale: "en" url: "https://longbridge.com/en/news/232472204.md" published_at: "2025-03-19T21:17:06.000Z" --- # OpenAI research lead Noam Brown thinks AI ‘reasoning’ models could’ve arrived decades ago > Noam Brown, OpenAI's research lead, stated that AI reasoning models could have emerged 20 years earlier with the right approaches. He emphasized the importance of reasoning in AI, particularly in challenging situations. Brown discussed the complementary nature of pre-training and test-time inference in AI development. He also highlighted the potential for collaboration between academia and AI labs, especially in areas like AI benchmarking, which he believes could significantly improve the current state of AI assessments. Noam Brown, who leads AI reasoning research at OpenAI, says “reasoning” AI models like OpenAI’s o1 could’ve arrived 20 years earlier had researchers “known \[the right\] approach” and algorithms. “There were various reasons why this research direction was neglected,” Brown said during a panel at Nvidia’s GTC conference in San Jose on Wednesday. “I noticed over the course of my research that, OK, there’s something missing. Humans spend a lot of time thinking before they act in a tough situation. Maybe this would be very useful \[in AI\].” Brown is one of the principal architects behind o1, an AI model that employs a technique called test-time inference to “think” before it responds to queries. Test-time inference entails applying additional computing to running models to drive a form of “reasoning.” In general, so-called reasoning models are more accurate and reliable than traditional models, particularly in domains like mathematics and science. Brown stressed, however, that pre-training — training ever-larger models on ever-larger datasets — isn’t exactly “dead.” AI labs including OpenAI once invested most of their efforts in scaling up pre-training. Now, they’re splitting time between pre-training and test-time inference, according to Brown — approaches that Brown described as complementary. Brown was asked during the panel whether academia could ever hope to perform experiments on the scale of AI labs like OpenAI, given institutions’ general lack of access to computing resources. He admitted that it’s become tougher in recent years as models have become more computing-intensive, but that academics can make an impact by exploring areas that require less computing, like model architecture design. “\[T\] here is an opportunity for collaboration between the frontier labs \[and academia\],” Brown said. “Certainly, the frontier labs are looking at academic publications and thinking carefully about, OK, does this make a compelling argument that, if this were scaled up further, it would be very effective. If there is that compelling argument from the paper, you know, we will investigate that in these labs.” Brown’s comments come at a time when the Trump administration is making deep cuts to scientific grant-making. AI experts including Nobel Laureate Geoffrey Hinton have criticized these cuts, saying that they may threaten AI research efforts both domestic and abroad. 
Brown called out AI benchmarking as an area where academia could make a significant impact. “The state of benchmarks in AI is really bad, and that doesn’t require a lot of compute to do,” he said. As we’ve written about before, popular AI benchmarks today tend to test for esoteric knowledge, and give scores that correlate poorly to proficiency on tasks that most people care about. That’s led to widespread confusion about models’ capabilities and improvements.