TLDR
- Anthropic has publicly accused Chinese AI developers DeepSeek, Moonshot, and MiniMax of orchestrating systematic “distillation attacks” targeting its Claude AI system.
- The operation allegedly involved establishing 24,000 fraudulent user accounts and conducting over 16 million interactions with Claude.
- The companies reportedly leveraged commercial proxy networks to circumvent Anthropic’s geographical restrictions blocking Chinese access to Claude.
- MiniMax allegedly generated the highest activity volume, responsible for more than 13 million of the total 16 million interactions.
- Anthropic urges industry-wide collaboration among AI companies, cloud service providers, and regulatory bodies to address the threat.
Anthropic has publicly charged three Chinese artificial intelligence companies with executing systematic operations designed to siphon proprietary knowledge from its Claude AI platform through a method known as “distillation.”
> We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax.
> These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
> — Anthropic (@AnthropicAI) February 23, 2026
The accused companies are DeepSeek, Moonshot AI, and MiniMax. According to Anthropic, the coordinated campaign spanned more than 24,000 fake user accounts and generated over 16 million separate interactions with Claude.
In AI development, distillation is a technique in which a smaller or less capable "student" model is trained on the outputs of a more advanced "teacher" system, inheriting much of the teacher's behavior without reproducing its training process. Anthropic acknowledges this as an acceptable internal development practice, but argues it crosses ethical and legal boundaries when rivals use it to replicate proprietary technology.
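To make the concept concrete, here is a minimal sketch of the classic form of distillation: the student is trained to match the teacher's softened output distribution via a KL-divergence loss. This is a generic textbook illustration, not Anthropic's or any accused lab's actual method; the function names and temperature value are illustrative choices.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    A higher temperature exposes the teacher's relative preferences among
    non-top answers ("dark knowledge"), which is the extra signal a student
    model learns from beyond hard labels.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return float(np.mean(kl) * temperature ** 2)  # conventional T^2 scaling

# Toy check: a student that matches the teacher exactly incurs zero loss,
# while a mismatched student incurs a positive loss.
teacher = np.array([[4.0, 1.0, 0.5]])
matched_loss = distillation_loss(teacher, teacher)
mismatched_loss = distillation_loss(np.array([[0.5, 1.0, 4.0]]), teacher)
```

In a full training loop, the gradient of this loss with respect to the student's logits would drive the student toward the teacher's distribution; for API-based "black-box" distillation of the kind described in this article, only the teacher's text outputs are available, so training data is built from prompt/response pairs instead of logits.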
“Distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost,” Anthropic wrote in its blog post on Sunday.
Anthropic’s service agreement explicitly prohibits commercial Claude usage from China. To circumvent these restrictions, the three companies allegedly employed commercial proxy infrastructure to simultaneously operate tens of thousands of Claude accounts through international networks.
After gaining access, the firms reportedly submitted massive quantities of strategically designed prompts intended to extract specific capabilities from Claude. The AI-generated responses were then used to train their own models or as datasets for reinforcement learning procedures.
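The pipeline described above amounts to collecting prompt/response pairs and packaging them as supervised fine-tuning data. The sketch below shows that last step in its common chat-style JSONL form; the example prompts and the `to_sft_records` helper are hypothetical illustrations, not artifacts from the alleged campaigns.

```python
import json

# Hypothetical prompt/response pairs; in the alleged campaigns these would be
# targeted prompts and the frontier model's replies, collected at scale.
pairs = [
    {"prompt": "Write a Python function to deduplicate a list.",
     "response": "def dedupe(xs): return list(dict.fromkeys(xs))"},
    {"prompt": "Explain what a SQL JOIN does.",
     "response": "A JOIN combines rows from two tables on a shared key."},
]

def to_sft_records(pairs):
    """Convert prompt/response pairs into chat-style supervised
    fine-tuning records, the common JSONL shape for SFT pipelines."""
    return [
        {"messages": [
            {"role": "user", "content": p["prompt"]},
            {"role": "assistant", "content": p["response"]},
        ]}
        for p in pairs
    ]

records = to_sft_records(pairs)
jsonl = "\n".join(json.dumps(r) for r in records)  # one record per line
```

A student model fine-tuned on enough such records imitates the teacher's answers directly, which is why this route is far cheaper than training comparable capabilities from scratch.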
The campaigns specifically targeted Claude’s most sophisticated capabilities, including autonomous reasoning, tool integration, software development, data interpretation, and computer vision processing.
MiniMax Generated the Highest Volume
Among the three accused organizations, MiniMax was by far the most active. Anthropic reports that MiniMax alone was responsible for over 13 million of the 16 million total interactions recorded.
DeepSeek, Moonshot AI, and MiniMax are all headquartered in China and each maintains valuations in the billions of dollars. All three companies declined to provide statements when contacted.
Anthropic stated it identified the perpetrators through analysis of IP address patterns, examination of request metadata, infrastructure fingerprinting, and intelligence shared by industry collaborators who detected identical actors on competing platforms.
Other AI Companies Report Similar Incidents
Anthropic isn’t the sole American AI developer highlighting this concern. OpenAI submitted a formal letter to United States legislators this month, asserting it had observed patterns “indicative of ongoing attempts by DeepSeek to distill frontier models.”
OpenAI initially raised distillation warnings in early 2024, following DeepSeek’s debut model launch, which observers noted bore striking similarities to ChatGPT.
Anthropic announced plans to strengthen its response through enhanced detection capabilities, stricter access protocols, and expanded threat intelligence sharing with industry partners.
The company is also calling for a coordinated response from the wider AI industry, cloud providers, and policymakers. “No company can solve this alone,” Anthropic wrote.
Industry analysts consulted by CNBC observed that distinguishing legitimate from illicit distillation can be difficult, and urged caution in assessing such allegations.