The Business & Technology Network
Helping Business Interpret and Use Technology

Anthropic Rallies Industry to Combat AI Model Theft

Tags: google
DATE POSTED: February 23, 2026

Anthropic said Monday (Feb. 23) that the Chinese artificial intelligence labs DeepSeek, MiniMax and Moonshot AI have illicitly used the outputs of its AI model Claude to train their own models.

The firms used a technique called “distillation,” in which the outputs of a strong model are used to train a less capable one. When competitors use this technique illicitly, they can acquire capabilities from other labs in much less time and at much less cost than they could develop those capabilities themselves, Anthropic said in the post.
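The core of the technique is straightforward: the weaker "student" model is trained to imitate the stronger "teacher" model's output distribution rather than hard labels. The following is a minimal illustrative sketch of a soft-label distillation loss; the shapes, temperature value, and toy logits are invented for illustration and are not drawn from any of the companies' systems.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given temperature."""
    scaled = logits / temperature
    scaled = scaled - scaled.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened outputs.

    A temperature above 1 exposes the teacher's relative preferences among
    non-top answers, which carries much of the transferred capability.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature))
    return -np.mean(np.sum(teacher_probs * student_log_probs, axis=-1))

# A student whose logits match the teacher's incurs a lower loss than one
# whose logits differ, so minimizing this loss pulls the student toward
# the teacher's behavior.
teacher = np.array([[2.0, 0.5, -1.0]])
matching_student = np.array([[2.0, 0.5, -1.0]])
mismatched_student = np.array([[-1.0, 2.0, 0.5]])
assert distillation_loss(matching_student, teacher) < distillation_loss(mismatched_student, teacher)
```

In an illicit campaign of the kind Anthropic describes, the teacher's outputs would be harvested through the target lab's API rather than computed locally, which is why the activity shows up as anomalous account behavior.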

The company said it identified a total of 24,000 fraudulent accounts through which the three labs, in separate but similar campaigns, generated 16 million exchanges with Claude, in violation of Anthropic’s terms of service and regional access restrictions.

DeepSeek, MiniMax and Moonshot AI did not immediately reply to PYMNTS’ requests for comment.

Anthropic said in its post that illicit distillation campaigns could create AI models that lack necessary safeguards, could enable authoritarian governments to deploy frontier AI, and could undermine export controls meant to help maintain America’s lead in AI.

The company is combating distillation attacks by building tools to detect them, sharing intelligence with other companies, strengthening access controls for the sorts of accounts that are most often used in these attacks, and developing countermeasures, according to the post.

“These campaigns are growing in intensity and sophistication,” Anthropic said in the post. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers and the global AI community.”

Google Threat Intelligence Group (GTIG) said in a Feb. 12 blog post that it has seen a growing incidence of distillation attacks or “model extraction attacks.”

During 2025, GTIG and Google DeepMind identified and disrupted model extraction attacks originating with researchers and private-sector companies around the world.

“Organizations that provide AI models as a service should monitor API access for extraction or distillation patterns,” GTIG said in its post. “For example, a custom model tuned for financial data analysis could be targeted by a commercial competitor seeking to create a derivative product, or a coding model could be targeted by an adversary wishing to replicate capabilities in an environment without guardrails.”
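One simple version of the monitoring GTIG describes is to flag accounts whose traffic combines very high request volume with near-unique prompts, since an extraction campaign queries the teacher on fresh inputs to harvest new outputs. The sketch below is hypothetical: the log format, field names, and thresholds are invented for illustration and are not taken from any vendor's detection system.

```python
from collections import Counter

def flag_extraction_candidates(request_log, volume_threshold=1000,
                               diversity_threshold=0.9):
    """Return account IDs whose usage pattern resembles bulk distillation.

    request_log: iterable of (account_id, prompt) pairs.
    An account is flagged when its request volume is high AND nearly every
    prompt it sends is distinct, a combination rare in ordinary use.
    """
    volumes = Counter(acct for acct, _ in request_log)
    prompts_by_acct = {}
    for acct, prompt in request_log:
        prompts_by_acct.setdefault(acct, set()).add(prompt)

    flagged = []
    for acct, volume in volumes.items():
        if volume < volume_threshold:
            continue  # low-volume accounts are not worth scoring
        diversity = len(prompts_by_acct[acct]) / volume
        if diversity >= diversity_threshold:
            flagged.append(acct)
    return flagged

# A bulk account sending 1,500 distinct prompts is flagged; a normal
# account repeating the same prompt 50 times is not.
log = [("bulk", f"prompt-{i}") for i in range(1500)] + [("normal", "hello")] * 50
assert flag_extraction_candidates(log) == ["bulk"]
```

Real defenses layer many more signals on top of this (geographic restrictions, account-creation patterns, prompt-content analysis), but the volume-plus-diversity heuristic captures the basic shape of what an extraction pattern looks like in an API log.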

The post Anthropic Rallies Industry to Combat AI Model Theft appeared first on PYMNTS.com.
