---
title: "Anthropic: No, absolutely not, you may not use third-party harnesses with Claude subs"
description: "Anthropic has updated its legal terms to prohibit the use of third-party harnesses with its Claude subscriptions, aiming to protect its revenue model. The company sells subscriptions for its Claude pl"
type: "news"
locale: "en"
url: "https://longbridge.com/en/news/276489849.md"
published_at: "2026-02-20T21:56:58.000Z"
---

# Anthropic: No, absolutely not, you may not use third-party harnesses with Claude subs

> Anthropic has updated its legal terms to prohibit the use of third-party harnesses with its Claude subscriptions, aiming to protect its revenue model. The company sells subscriptions for its Claude platform, which includes various machine learning models and tools. The updated terms clarify that only authorized access methods are allowed, and using OAuth tokens from Claude accounts in other products is forbidden. This move is intended to prevent disintermediation by third-party tools that could undermine Anthropic's business model and user experience.

Anthropic this week revised its legal terms to clarify its policy forbidding the use of third-party harnesses with Claude subscriptions, as the AI biz attempts to shore up its revenue model.

Anthropic sells subscriptions to its Claude platform, which provides access to a family of machine learning models (e.g. Opus 4.6) and associated tools, including Claude Code, a web-based interface at Claude.ai, and the Claude Desktop application, among others.

Claude Code is a harness, or wrapper – it integrates with the user's terminal and routes prompts to the available Claude model, in conjunction with other tools and a control loop that together make it what Anthropic calls an agentic coding tool. Many other tools serve as harnesses for models, such as OpenAI Codex, Google Antigravity, Manus (recently acquired by Meta), OpenCode, Cursor, and Pi (the harness behind OpenClaw).
Harnesses exist because interacting directly with a machine learning model is not a great user experience – you feed it a prompt and it returns a result. That's a single-turn interaction: input and output. To create a product people care about, model makers have added support for multi-turn interaction, memory of prior interactions, access to tools, and orchestration to handle data flowing between those tools. Some of this support has been baked into model platforms; some of it has been added through harness tooling.

This poses a business problem for frontier model makers: they've invested billions to train sophisticated models, but they risk being disintermediated by third parties that build harnesses around those models and offer a better user experience.

One of the ways Anthropic has chosen to build brand loyalty is by selling tokens to subscription customers at a monthly price, with usage limits, that ends up costing less than pay-as-you-go token purchases through the Claude API. Essentially, the economics resemble an all-you-can-eat buffet priced around certain usage expectations.

That practice, effectively a subsidy for subscribers, led to token arbitrage: customers accessed Claude models via subscriptions linked to third-party harnesses because it cost less than doing the same work via an API key.

The AI biz's Consumer Terms of Service have forbidden the use of third-party harnesses, except with specific authorization, since at least February 2024. The contractual language in Section 3.7, which remains unchanged from that time, says as much – any automated access tool not officially endorsed is forbidden.
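The pattern described above – memory, tool access, and a control loop wrapped around single-turn model calls – can be sketched in a few lines. Everything below is hypothetical illustration: the function names, message format, and "calculator" tool are invented for this sketch and are not Anthropic's API or any real harness's implementation.

```python
def fake_model(messages):
    """Stand-in for a single-turn model call. A real harness would
    send `messages` to an API endpoint and parse the reply."""
    last = messages[-1]["content"]
    if last.startswith("calc:"):
        # Pretend the model decided to invoke a tool.
        return {"type": "tool_call", "tool": "calculator", "arg": last[5:]}
    return {"type": "text", "content": f"echo: {last}"}

def calculator(expr):
    """A trivial 'tool' the harness exposes to the model."""
    a, op, b = expr.split()
    a, b = float(a), float(b)
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def harness_turn(history, user_input):
    """One turn of the control loop: append the user message, call
    the model, run any requested tool, and feed the result back --
    the orchestration a bare single-turn model call lacks."""
    history.append({"role": "user", "content": user_input})
    reply = fake_model(history)
    while reply["type"] == "tool_call":
        result = calculator(reply["arg"])
        history.append({"role": "tool", "content": str(result)})
        reply = fake_model(history)
    history.append({"role": "assistant", "content": reply["content"]})
    return reply["content"]

history = []  # conversation memory persists across turns here
print(harness_turn(history, "hello"))        # echo: hello
print(harness_turn(history, "calc: 2 * 3"))  # echo: 6.0
```

The point of the sketch is the shape, not the details: the harness owns the loop and the state, while the model itself only ever sees one request and produces one response at a time.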
> You may not access or use, or help another person to access or use, our Services in the following ways:
>
> Except when you are accessing our Services via an Anthropic API Key or where we otherwise explicitly permit it, to access the Services through automated or non-human means, whether through a bot, script, or otherwise.

Despite the presence of that passage for more than two years, a variety of third-party tools have flouted the rule and allowed users to supply a Claude subscription account key.

The added rule explicitly states that OAuth authentication, the access method used by Claude Free, Pro, and Max tier subscribers, is intended only for Claude Code and Claude.ai (the web interface for Claude models).

"Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service," the updated legal compliance page says.

According to Anthropic, the update represents an attempt to clarify existing policy language and make it consistent throughout company documentation.

Anthropic appears to have decided to police its rules at the start of the year. In a January social media thread, Anthropic engineer Thariq Shihipar said the company had taken steps to prevent third-party tools from "spoofing the Claude Code harness."

"Third-party harnesses using Claude subscriptions create problems for users and are prohibited by our Terms of Service," he wrote.
"They generate unusual traffic patterns without any of the usual telemetry that the Claude Code harness provides, making it really hard for us to help debug when they have questions about rate limit usage or account bans, and they don't have any other avenue for this support."

The prohibition proved unpopular enough to elicit a response from the competition: OpenAI's Thibault Sottiaux pointedly endorsed the use of Codex subscriptions in third-party harnesses.

After banning accounts for attempting to game its pricing structure, Anthropic has now clarified its legalese, as Shihipar indicated would happen, and makers of third-party harnesses are taking note. On Thursday, OpenCode pushed code to remove support for Claude Pro and Max account keys and Claude API keys. The commit cites "anthropic legal requests." ®