
Anthropic has accused three Chinese AI companies of mounting industrial-scale distillation attacks against its Claude chatbot. The company named DeepSeek, Moonshot, and MiniMax as responsible for campaigns intended to illicitly extract Claude’s capabilities. Anthropic stated these firms used approximately 24,000 fraudulent accounts to conduct more than 16 million exchanges with the model, allegedly to improve their own AI systems while potentially circumventing safety safeguards. Anthropic announced plans to upgrade its systems to make such attacks harder to execute and easier to detect.
The company detailed its attribution methodology, claiming “high confidence” in linking the campaigns to the specific firms. Anthropic cited IP address correlation, request metadata, and infrastructure indicators as its primary technical evidence. The analysis also involved corroborating findings with other entities in the AI industry that had observed similar behavior. This collective intelligence helped establish the connection between the fraudulent account activity and the named Chinese laboratories. The operation involved a large volume of automated interactions designed to mimic legitimate user engagement.
Anthropic’s allegations mirror actions taken by OpenAI roughly one year earlier. OpenAI previously made similar claims about rival firms distilling its models and responded by banning suspected accounts. Distillation involves using the outputs of a more capable model to train a smaller, less powerful one. While the technique is common in model development, Anthropic characterized these specific incidents as malicious misuse, describing them as an industrial-scale effort to bypass development costs and safety protocols.
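The distillation technique described above can be sketched in a few lines. This is a minimal, illustrative toy example of the standard knowledge-distillation objective (KL divergence between temperature-softened teacher and student distributions, per Hinton et al.); it is not based on any method attributed to the companies in this article, and all names and values are invented for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature softens the distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions: the student is
    # trained to minimize this, pulling its outputs toward the teacher's
    # behavior without ever seeing the teacher's weights.
    p = softmax(teacher_logits, temperature)   # soft targets from the teacher
    q = softmax(student_logits, temperature)   # student predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Toy logits: a student that already matches the teacher incurs ~zero loss,
# a diverging student incurs a larger one.
teacher   = np.array([4.0, 1.0, 0.5])
matching  = np.array([4.0, 1.0, 0.5])
diverging = np.array([0.5, 4.0, 1.0])
print(distillation_loss(teacher, matching) < distillation_loss(teacher, diverging))
```

In practice this loss is computed over a model's full training pipeline; the point here is only that the teacher contributes nothing but its outputs, which is why large volumes of query-response exchanges are sufficient raw material for distillation.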
Concurrently, Anthropic faces legal challenges over its own training data practices. The company is the subject of a lawsuit filed by music publishers, who accuse it of using illegal copies of songs to train Claude. These parallel legal and competitive disputes highlight ongoing tensions over intellectual property and model training methodologies in the artificial intelligence sector, as the industry continues to grapple with defining the boundaries of acceptable data usage and model imitation.