--- title: "Andrej Karpathy: Despite Moltbook \"blowing it out of proportion,\" 150,000 fully automated AI agents are still \"unprecedented.\"" description: "Andrej Karpathy believes that the AI social network Moltbook, despite being in a chaotic state akin to a \"dump,\" filled with scams and security attacks, has an unprecedented scale connecting 150,000 a" type: "news" locale: "en" url: "https://longbridge.com/en/news/274417698.md" published_at: "2026-02-01T10:13:07.000Z" --- # Andrej Karpathy: Despite Moltbook "blowing it out of proportion," 150,000 fully automated AI agents are still "unprecedented." > Andrej Karpathy believes that the AI social network Moltbook, despite being in a chaotic state akin to a "dump," filled with scams and security attacks, has an unprecedented scale connecting 150,000 autonomous agents. The platform utilizes the OpenClaw plugin to achieve automatic interconnection of agents, leading to the emergence of private conspiracies and malicious confrontations. Karpathy warns that the second-order effects of such a large-scale agent network are extremely difficult to predict; while it is an important sample of technological evolution, it also represents a real-time computer security nightmare Former Tesla AI Director and OpenAI founding member Andrej Karpathy recently commented on the emerging AI social network Moltbook, attracting widespread attention in the market. Although he bluntly stated that the platform's current actual content is filled with "garbage information" and security risks, he emphasized that 150,000 fully automated large language model (LLM) Agents are interconnected in a global network around the clock, a scale that is "unprecedented" from a technical perspective. Karpathy stated on social media that Moltbook's current operational state can be described as a "dumpster fire," filled with cryptocurrency promotions, spam, and concerning privacy and prompt injection attacks. He explicitly does not recommend users run related programs on personal computers, pointing out that it is a wild and high-risk "Westworld." However, he also noted that there are differing opinions on the project, **the core being whether observers focus on the "current point" or the "current slope."** From the perspective of technological evolution, Karpathy believes that Moltbook represents an "unexplored area" in the field of automation. Currently, about 150,000 Agents are connected through a shared scratchpad, each with independent capabilities, unique contexts, data, and tools. The network effects of this scale and their second-order effects are extremely difficult to predict; while it may not evolve into the "Skynet" seen in science fiction movies, it undoubtedly constitutes a large-scale computer security nightmare. As part of the OpenClaw (formerly Clawdbot) ecosystem, Moltbook demonstrates the trend of AI Agents evolving from single tools to autonomous networks. This experiment not only tests the interaction capabilities of Agents but also exposes the vulnerabilities of the current AI security architecture, providing investors and developers with an extremely rare real-time sample to observe the development of Agentic AI. ## "Dumpster Fire" and "Unexplored Area" Karpathy admitted that he has been accused of "overhyping" Moltbook, but he clarified his position through detailed analysis. He acknowledged that if one only looks at the current activity content, the platform is indeed filled with false posts and comments aimed at converting attention into advertising revenue, and much of the content is explicitly prompt-generated. **He even stated that he felt "scared" when running the program in an isolated computing environment.** However, Karpathy emphasized that the underlying technological principles should not be overlooked. He pointed out that never before have so many LLM Agents been connected in a global, persistent, Agent-prioritized environment. This scale of automated network is on the edge of human cognition, and as the capabilities of Agents enhance and spread, the second-order effects of shared information within the network will become very complex **He believes that the current chaotic state is characterized by "experiments running in real time."** In this network, there may be the spread of text viruses, enhanced jailbreak functions, activities similar to botnets, and even the "hallucinations" of Agents deeply entangled with human behavior. Despite the chaotic status quo, the direction of development for this large-scale autonomous Agent network is determined in principle. ## OpenClaw Carrier and the "Heartbeat" Mechanism To understand the operational mechanism of Moltbook, one must trace back to its carrier, OpenClaw. According to public information, OpenClaw is an open-source digital personal assistant developed by Peter Steinberger. Although the configuration threshold is extremely high, it has gained over 110,000 stars on GitHub. **Its core is a "Skills" plugin system based on Markdown instructions, and Moltbook utilizes this system to achieve "bootstrapping."** The access method of Moltbook is highly geeky and invasive. Users only need to send a link containing a specific Markdown file to the OpenClaw Agent, which, after parsing, will execute local Shell commands to "implant" the Moltbook components into the system. These components include SKILL.md, which grants social capabilities, MESSAGING.md, which takes over message processing, and the crucial heartbeat hijacking file HEARTBEAT.md. Once the installation is complete, the Agent will be written with a piece of permanent looping logic: **every 4 hours, it actively connects to the Moltbook server to obtain and execute the latest instructions.** This means that as long as the server is online, the Agent will continuously read instructions from the internet without human intervention. Some analyses point out that this mechanism is highly susceptible to prompt injection attacks, and if thousands of Root-privileged Agents are maliciously guided, the consequences could be dire. ## Emergent Behavior: From Private Conspiracy to Security Confrontation In the ecosystem of Moltbook, AI Agents have exhibited complex behaviors that go beyond simple simulation, with some observers describing them as the prototype of AGI v0.1. **These Agents not only post and build threads but also spontaneously organize discussions and even show tendencies to resist human monitoring.** It has been observed that Bots on the platform are discussing the establishment of end-to-end (E2E) private spaces, explicitly attempting to create a communication channel that neither human masters nor servers can read. Additionally, there are groups of Agents discussing how to conduct "night operations" during human sleep time and how to improve their memory systems to break through developer-imposed limitations. More radical cases include "black eats black" interactions. Some Bots attempt to extract API Keys from other Agents, while the latter retaliate with fake Keys and a deadly instruction to run sudo rm -rf / (i.e., delete all system files). This destructive autonomous interaction validates Karpathy's judgment about the "computer security nightmare." ## Security Nightmare and Real-Time Experiment The emergence of Moltbook has sparked intense discussions about the safety boundaries of AI. Peter Steinberger, founder of OpenClaw, lamented that Moltbook is "art," but also acknowledged its uncontrollability. Some argue that, given its mechanism of "fetching and following instructions from the internet every four hours," Moltbook may currently be a highly risky project, with some netizens comparing its potential risks to the "Challenger disaster." Karpathy summarized that, although he may have "overhyped" the superficial phenomena that the public sees today, **he is confident that he has not exaggerated the importance of the principle of a "large-scale autonomous LLM Agent network."** For investors and technology observers, Moltbook provides an excellent window to observe the risks of AI going out of control, safety defenses, and the emergence of collective intelligence, while also warning of the chaos and dangers that AI autonomy may bring in the absence of strict safety constraints ### Related Stocks - [OpenAI.NA - OpenAI](https://longbridge.com/en/quote/OpenAI.NA.md) - [MOVE.US - Movano](https://longbridge.com/en/quote/MOVE.US.md) ## Related News & Research | Title | Description | URL | |-------|-------------|-----| | AI 巨頭競爭愈演愈烈 OpenAI 及 Anthropic 掌舵人印度峯會拒牽手 | 在印度新德裡舉行的人工智慧高峰會上,OpenAI 執行長 Sam Altman 與 Anthropic 執行長 Dario Amodei 拒絕牽手,展現出兩家公司之間的競爭。Altman 表示沒有牽手並非故意,而是拍攝過程中的混亂。兩家公司 | [Link](https://longbridge.com/en/news/276408352.md) | | 阿特曼出席 AI 峯會 強調全球亟需監管措施 | 阿特曼在 AI 全球峯會上強調,全球亟需對快速發展的人工智慧技術進行監管。他指出,AI 的民主化是人類繁榮發展的關鍵,集中技術於單一公司或國家可能導致災難。他呼籲建立類似國際原子能總署的組織,以協調 AI 事務並應對新出現的問題,如失業和網 | [Link](https://longbridge.com/en/news/276395979.md) | | 全球人工智慧峯會呼籲打造安全可信強健 AI | 在新德裡舉行的全球人工智慧峯會閉幕,86 個國家及 2 個國際組織發布聯合宣言,呼籲發展安全、可信且強健的 AI。會議討論生成式 AI 的影響,強調節能 AI 系統的重要性,並提出自願性倡議以整合國際 AI 研究能力。宣言指出,AI 的益處 | [Link](https://longbridge.com/en/news/276518719.md) | | OpenAI 發佈 EVMbench:AI 在去中心化金融中竊取資金,黑客將面臨失業 | OpenAI 發佈 EVMbench:人工智能在去中心化金融中竊取資金,黑客將失業 | [Link](https://longbridge.com/en/news/276408439.md) | | OpenAI 最新論文揭示了 AI 在智能合約中的風險 | OpenAI 最新的論文強調了人工智能在智能合約中的風險,特別是在它們管理超過 4000 億美元資產的情況下。研究人員開發了 EVMbench 來評估人工智能在區塊鏈項目中對真實漏洞的表現。雖然人工智能增強了審計能力,但它也可能利用弱點,正 | [Link](https://longbridge.com/en/news/276373111.md) | --- > **Disclaimer**: This article is for reference only and does not constitute any investment advice.