---
title: "Senior AI staffers keep quitting - and are issuing warnings about what's going on at their companies"
type: "News"
locale: "zh-CN"
url: "https://longbridge.com/zh-CN/news/275795004.md"
description: "A wave of resignations among senior AI staffers at companies like OpenAI and Anthropic has raised alarms about AI safety concerns. Notable exits include Zoë Hitzig, who publicly criticized OpenAI's strategy in a New York Times essay, and Mrinank Sharma, who expressed difficulties in aligning values with actions. The trend reflects growing anxieties over the rapid pace of AI innovation and its potential risks. Despite ongoing turnover in the industry, the implications of these departures could be significant, especially if safety concerns lead to regulatory challenges affecting financial returns for AI firms."
datetime: "2026-02-12T18:30:36.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/275795004.md)
  - [en](https://longbridge.com/en/news/275795004.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/275795004.md)
---

> Supported languages: [English](https://longbridge.com/en/news/275795004.md) | [Traditional Chinese](https://longbridge.com/zh-HK/news/275795004.md)

# Senior AI staffers keep quitting - and are issuing warnings about what's going on at their companies

By Hannah Pedone

Through social-media posts, and even a resignation letter in the New York Times, former employees of companies like OpenAI and Anthropic are not leaving quietly.

A stream of staffers at Anthropic, OpenAI and xAI resigned this week, many citing AI safety concerns.

On Wednesday, Zoë Hitzig, a former researcher at OpenAI, said she quit her job at the company after it started testing ads on ChatGPT. Yet instead of just updating her job status on LinkedIn or texting friends and colleagues about her decision, Hitzig announced her resignation in the New York Times, writing a guest essay published yesterday titled "OpenAI is Making the Mistakes Facebook Made. I Quit."
Hitzig said that while she doesn't believe ads are "immoral or unethical," she has "deep reservations about OpenAI's strategy."

"People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent," she wrote.

A flurry of public resignations has gripped the artificial-intelligence industry this week. Safety researchers, co-founders and other insiders at top AI companies have recently chosen to leave. Several did so through heartfelt announcements on X, sounding alarms over AI's existential risks. Others have left their posts more quietly. With AI innovation accelerating, these insider exits are heightening anxiety about the pace of the technology and its potential safety impacts.

Hitzig and OpenAI did not immediately respond to requests for comment. But Hitzig's resignation letter in the New York Times was reminiscent of Greg Smith's "Why I Am Leaving Goldman Sachs" column in 2012, which tapped into anxieties around the 2008-09 financial crisis and the bailout of big Wall Street banks that followed it. The column led to an appearance on "60 Minutes" and a bestselling book for Smith. As high as the stakes seemed then, the implications of AI could be even more significant.

On Feb. 9, Mrinank Sharma, an AI researcher at Anthropic who led the company's Safeguards research team, announced his resignation. Speaking of his time at Anthropic in a note to colleagues posted on his X account, he said: "I've repeatedly seen how hard it is to truly let our values govern our actions." In a comment on the post, he wrote that he will be moving back to the U.K. to let himself "become invisible for a period of time."

Sharma did not immediately respond to a request for comment.
See also: Despite questions about AI's long-term profitability, OpenAI and Anthropic accelerate investment

On Feb. 10, Tony Wu, a former xAI co-founder, announced in an X post his resignation from the Elon Musk-led company. Within 24 hours, another xAI co-founder, Jimmy Ba, also resigned. "We are heading to an age of 100x productivity with the right tools," Ba wrote in a post on X. He added: "It's time to recalibrate my gradient on the big picture."

These prominent exits follow the merger of xAI and SpaceX earlier this month, though the reasons for the departures remain unclear. xAI did not immediately respond to a request for comment.

Read more: OpenAI reportedly eyeing an IPO by year's end, ahead of Anthropic

It's important to note that staff turnover within the AI world has long been common. Ba and Wu's departures come after half of xAI's 12 co-founders left the company in recent years. Jan Leike, a researcher at Anthropic who formerly worked at OpenAI and DeepMind, according to his X bio and website, left OpenAI in 2024, also while sounding alarms over AI safety concerns.

"I joined because I thought OpenAI would be the best place in the world to do this research," he said in a 2024 post on X. On his website, Leike writes that his research focuses on solving the "hard problem of alignment": "How can we train AI systems to follow human intent on tasks that are difficult for humans to evaluate directly?"

Leike added in his X post: "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

The recent high-profile exits haven't always been initiated by staff. In January, OpenAI fired Ryan Beiermeister, one of the company's top safety executives, after she voiced concerns over the rollout of AI erotica in ChatGPT.
The company told Beiermeister that her termination was tied to her sexual discrimination against a colleague, which she denies, according to the Wall Street Journal.

Dimitri Zabelin, a senior AI analyst at PitchBook, said in an interview that unless AI safety concerns bring regulatory hurdles that could meaningfully impact financial returns for AI companies, alarm bells over safety are unlikely to change general corporate or investment direction.

"[T]he topic of AI safety has not merited a sufficient level of concern amongst investors that would meaningfully alter fundraising trends and capital inflows," he said.

Zabelin noted that if resignations of technical staff begin to affect AI models' ability to operate well, "then you could see that begin to be reflected in investment flows and subsequent valuations."

-Hannah Pedone

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

(END) Dow Jones Newswires

02-12-26 1330ET

### Related stocks

- [OpenAI (OpenAI.NA)](https://longbridge.com/zh-CN/quote/OpenAI.NA.md)

## Related news and research

- [China issues new safety rules for OpenClaw. Here are the dos and don'ts](https://longbridge.com/zh-CN/news/278837384.md)
- [OpenAI recruits the "father of lobsters" to prepare for the largest AI IPO in history.](https://longbridge.com/zh-CN/news/278812215.md)
- [OpenAI sued over Canada school shooting](https://longbridge.com/zh-CN/news/278588303.md)
- [Global Anti-Scam Alliance Launches Scam.org with OpenAI and Key Partners](https://longbridge.com/zh-CN/news/278859591.md)
- [OpenAI acquires Promptfoo to secure its AI agents](https://longbridge.com/zh-CN/news/278430410.md)