
Anthropic claims Chinese AI companies "distill" Claude to improve their own models
Anthropic said on Monday that three Chinese artificial intelligence (AI) companies improperly leveraged Claude's capabilities to improve their own models, and cited the activity as a reason to impose export controls on chips.
The company reported that DeepSeek, Moonshot, and MiniMax used approximately 24,000 fake accounts to interact with Claude more than 16 million times, violating Anthropic's terms of service and regional access restrictions. Anthropic said the companies employed a technique known as "distillation," in which a weaker model is trained on outputs from a stronger one.
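As a rough illustration only (not Anthropic's description of the companies' actual pipelines), distillation can be sketched as training a small "student" model to imitate the soft outputs of a stronger "teacher." Here the teacher is a hypothetical fixed logistic scorer and the student is a one-parameter logistic model fit by gradient descent:

```python
import math

def teacher(x):
    # Hypothetical "strong model": returns a soft probability, not a hard label.
    return 1 / (1 + math.exp(-2.0 * x))

def student(w, x):
    # The weaker model being trained: a one-parameter logistic function.
    return 1 / (1 + math.exp(-w * x))

def distill(xs, lr=0.1, epochs=500):
    # Train the student to match the teacher's soft outputs by minimizing
    # cross-entropy; its gradient with respect to w is (student - teacher) * x.
    w = 0.0
    for _ in range(epochs):
        for x in xs:
            w -= lr * (student(w, x) - teacher(x)) * x
    return w

inputs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
w = distill(inputs)
```

After training, the student's weight converges toward the teacher's (2.0), so the student reproduces the teacher's behavior without access to its internals — which is why the technique only requires query access to the stronger model.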
The company argued that such distillation attacks strengthen the case for export controls, since limiting chip access curbs both direct model training and the scale at which improper distillation can be run.

