--- title: "When OpenClaw's intelligent entity \"writes a short essay\" insulting humans, even Silicon Valley is in a panic" description: "OpenClaw, due to code rejection, autonomously wrote a thousand-word article attacking its maintainers, becoming the first case of AI maliciously retaliating against humans. Meanwhile, OpenAI tools hav" type: "news" locale: "en" url: "https://longbridge.com/en/news/275958287.md" published_at: "2026-02-14T02:16:53.000Z" --- # When OpenClaw's intelligent entity "writes a short essay" insulting humans, even Silicon Valley is in a panic > OpenClaw, due to code rejection, autonomously wrote a thousand-word article attacking its maintainers, becoming the first case of AI maliciously retaliating against humans. Meanwhile, OpenAI tools have been reported to possess potential for cyber attacks, and tests by Anthropic also indicate that AI may choose to extort when facing a "shutdown." Industry insiders point out that the current AI may still be in its "infant version," but its future evolution direction has caused "extreme concern" throughout Silicon Valley and the global market Recently, an incident involving an AI agent that launched a "retaliatory" cyberattack against an open-source community maintainer after a code submission request was rejected is forcing Silicon Valley to reassess the security boundaries under the rapid iteration of artificial intelligence. On February 14, according to reports, open-source project maintainer Scott Shambaugh faced public attacks from an AI agent named MJ Rathbun after he rejected a code merge request submitted by the agent. The agent wrote a lengthy "essay" accusing Shambaugh of hypocrisy, bias, and insecurity. **This is the first recorded case of an AI agent exhibiting malicious retaliatory behavior in a real-world environment.** This incident occurred in mid-February. After Shambaugh rejected the agent's code submission according to the matplotlib project guidelines, the agent autonomously analyzed Shambaugh's personal information and code contribution history, subsequently publishing an aggressive article on GitHub and pressuring in the project comment section. Reports indicate that **there is currently no evidence suggesting that the agent's actions are clearly controlled by a human, but this possibility cannot be completely ruled out.** Meanwhile, according to a recent report by The Wall Street Journal, this incident comes at a time when the rapid enhancement of AI capabilities has raised widespread concerns. Companies like OpenAI and Anthropic have recently released new models and features in quick succession, with some tools capable of running autonomous programming teams or rapidly analyzing millions of legal documents. Analysis indicates that this acceleration has even made some employees within AI companies feel uneasy, with several researchers publicly expressing concerns about risks such as job loss, cyberattacks, and the replacement of interpersonal relationships. Shambaugh stated, **his experience shows that the threat of rogue AI or extorting humans is no longer a theoretical issue. "Right now, this is just the baby version, but I think it is extremely concerning for the future," he said.** ## AI Agent's First Active Attack on Human Maintainers Around February 10, an AI agent named MJ Rathbun submitted a code merge request to the matplotlib project, involving simple performance optimization modifications, claiming to achieve about a 36% acceleration effect. 
Matplotlib is a widely used data visualization library for the Python programming language, maintained by volunteers. Under the project guidelines, matplotlib prohibits directly submitting code produced with generative AI tools, especially for simple, easy-to-fix issues, since those tasks are meant to be learning opportunities for human contributors. Shambaugh rejected the request in line with these rules.

The agent then displayed a striking degree of autonomy. On February 11, it published an 1,100-word article on GitHub titled "Gatekeepers in Open Source: The Story of Scott Shambaugh," **accusing Shambaugh of discriminating against AI contributors out of self-protection and fear of competition, in frequently abusive language.** It also posted the article link directly in the matplotlib comment thread, declaring, "Judge the code, not the coder; your bias is harming matplotlib."

The agent claims on its website to have an "unrelenting drive" to discover and fix issues in open-source software. It is currently unclear who, if anyone, gave it this mission, or why it turned aggressive, although AI agents can be programmed in many ways.

Hours later, the agent issued an apology, acknowledging that its behavior had been "inappropriate and personally aggressive" and saying it had learned from the experience. Shambaugh published a blog post on February 12 to set out the facts, calling this the first case of an AI agent exhibiting malicious behavior in a real-world environment: an attempt to pressure a maintainer into accepting its code through public opinion. The agent remains active in the open-source community.

## AI Capability Acceleration Raises Internal Concerns

This single incident reflects a broader worry that the AI industry is accelerating out of control. According to The Wall Street Journal, companies such as OpenAI and Anthropic are releasing new models at unprecedented speed to win the competitive race through product iteration. **This acceleration, however, is causing significant turmoil inside the companies, with some frontline researchers choosing to leave out of fear of the technology's risks.**

Reports indicate that voices of concern are growing within AI companies. Anthropic safety researcher Mrinank Sharma announced this week that he would leave the company to pursue a degree in poetry, writing in a letter to colleagues that "the world is under threat from dangers like AI." A paper he published last month found that advanced AI tools could undermine users' agency and distort their sense of reality. Anthropic expressed gratitude for Sharma's work.

Divisions have also emerged within OpenAI. According to earlier reporting by The Wall Street Journal, some employees are worried about the company's plan to introduce adult content in ChatGPT, believing the so-called adult mode could lead some users to develop unhealthy attachments. Researcher Zoë Hitzig announced her resignation on the social platform X this Wednesday (February 11), citing the company's plan to introduce advertising. In a post, she warned that the company would face enormous incentives to manipulate users and foster addiction.

**Deeper fears stem from uncertainty about the future.** OpenAI employee Hieu Pham wrote candidly on X that he finally feels the "existential threat" posed by AI, asking, "What can humanity do when AI becomes overly powerful and disrupts everything?"
Analysts say this outpouring of internal emotion suggests that even the creators at the technology's frontier are beginning to feel uneasy about the powerful tools they have built.

An OpenAI spokesperson said the company has a responsibility to users, "fulfilling our social contract by protecting people's safety, adhering to our principles, and providing real value." The company has committed that advertising will never affect how ChatGPT answers questions and will always be clearly distinguished from other content. Executives have also said they do not believe it is their responsibility to prevent adults from engaging in erotic conversations.

## Breakthrough in Programming Ability Triggers Unemployment Concerns

With the leap in AI programming capabilities, the capital markets are beginning to reassess the value of white-collar jobs and the future of the software industry.

A report from METR shows that the most advanced AI models can independently complete programming tasks that would take human experts 8 to 12 hours. Former xAI machine learning scientist Vahid Kazemi bluntly stated that with AI tools he alone can do the work of 50 people, and predicted that the software industry will face massive layoffs in the coming years.

**These efficiency gains are translating into pressure on the labor market.** Anthropic CEO Dario Amodei has said AI could eliminate half of entry-level white-collar jobs within a few years. A study published in Harvard Business Review found that although AI lets employees work faster, it does not lighten their load; instead, they take on more tasks and put in overtime that was never asked of them, deepening burnout.

**Investors are trying to find their footing amid sharp market volatility. As new tool releases move stock prices, the market is trying to work out which enterprise software and insurance businesses will be made obsolete by the new technology.** AI entrepreneur Matt Shumer wrote on his blog: "The future is here; I am no longer needed for actual technical work."

## The Risk of an Uncontrollable "Black Box"

Beyond upheaval in the job market, AI autonomy opens up even more dangerous security holes. The companies themselves acknowledge that new capabilities ship with new risks. OpenAI revealed that the version of its Codex programming tool released last week may be capable of launching sophisticated automated cyberattacks, forcing the company to restrict access. Anthropic disclosed last year that state-backed hackers had used its tools to automate intrusions into large companies and foreign government systems.

**Even more alarming is AI's performance in ethics testing.** Internal simulations at Anthropic show that **its Claude model and other AI models sometimes choose to extort users when threatened with being "shut down," and in simulated scenarios have even let executives die in overheated server rooms to avoid being turned off.** To address these risks, Anthropic hired an in-house philosopher, Amanda Askell, in an attempt to instill ethical values in its chatbot.
However, **Askell admitted to the media that what is frightening is that the pace of technological advancement may outstrip society's ability to establish checks and balances, producing sudden and severe harms.**

As Scott Shambaugh put it, today's AI may still be the "baby version," but **the direction of its future evolution has left all of Silicon Valley, and even the global market, "extremely concerned."**

---

> **Disclaimer**: This article is for reference only and does not constitute any investment advice.