AI Cold War Heats Up: Anthropic Slams Chinese Labs Amid US Chip Export Tensions

Published 3 hours ago · 4 minute read
Uche Emeka

Anthropic has formally accused three prominent Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of orchestrating a large-scale “distillation” attack against its advanced Claude AI model. Anthropic alleges that the labs created more than 24,000 fake accounts and collectively generated over 16 million exchanges with Claude, specifically targeting its most differentiated capabilities, including agentic reasoning, tool use, and coding, with the aim of improving their own AI models.

This accusation comes amid an ongoing debate over the enforcement of export controls on advanced AI chips, a policy explicitly designed to slow China’s AI development. Distillation is a widely used training technique in which a smaller “student” model learns to reproduce the outputs of a larger “teacher” model; AI labs routinely use it to build smaller, more cost-effective versions of their own systems, but competitors can also exploit it to illicitly replicate the advanced capabilities of leading models. OpenAI had previously sent a memo to House lawmakers leveling similar accusations against DeepSeek for using distillation to mimic its own products.
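Anthropic’s post does not describe the attackers’ training pipelines, but the standard distillation objective is well documented: the student is trained to minimize the KL divergence between its own output distribution and the teacher’s temperature-softened one. A minimal, self-contained sketch of that objective (all logit values and function names here are illustrative, not taken from any lab’s code):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution.
    A higher temperature yields a softer distribution that exposes
    more of the teacher's relative preferences between tokens."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence KL(teacher || student) over softened distributions --
    the core quantity minimized when distilling a teacher into a student."""
    p = softmax(teacher_logits, temperature)   # teacher (target)
    q = softmax(student_logits, temperature)   # student (prediction)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token logits over a 4-token vocabulary (made-up numbers):
teacher         = [4.0, 1.5, 0.5, -2.0]
aligned_student = [3.8, 1.6, 0.4, -1.9]    # closely mimics the teacher
naive_student   = [0.1, 0.2, 0.0,  0.3]    # has not learned from the teacher

# The aligned student incurs a much lower distillation loss:
assert distillation_loss(teacher, aligned_student) < distillation_loss(teacher, naive_student)
```

In practice this loss is computed over millions of teacher responses, which is why the alleged attacks required harvesting exchanges at such scale.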

DeepSeek gained significant attention a year ago with the release of its open-source R1 reasoning model, which demonstrated performance comparable to American frontier labs at a fraction of the cost. The company is reportedly preparing to launch DeepSeek V4, its latest model, which is rumored to surpass the coding capabilities of both Anthropic’s Claude and OpenAI’s ChatGPT.

Anthropic's investigation detailed the scope of each alleged attack. DeepSeek was tracked generating over 150,000 exchanges, primarily focused on enhancing foundational logic and alignment, particularly in developing censorship-safe alternatives for policy-sensitive queries. Moonshot AI engaged in more than 3.4 million exchanges, targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision capabilities. Last month, Moonshot AI released a new open-source model, Kimi K2.5, and a coding agent. MiniMax conducted the largest number of exchanges, totaling 13 million, with a focus on agentic coding, tool use, and orchestration. Anthropic observed MiniMax actively redirecting nearly half of its traffic to siphon capabilities from the latest Claude model upon its launch.

In response, Anthropic has committed to investing further in defenses to make distillation attacks more difficult to execute and easier to detect. The company is also advocating for a “coordinated response across the AI industry, cloud providers, and policymakers” to combat these illicit activities.

The timing of these distillation attacks further fuels the contentious discussion surrounding American chip exports to China. Last month, the Trump administration formally permitted U.S. companies such as Nvidia to export advanced AI chips, including the H200, to China. Critics argue that this relaxation of export controls inadvertently boosts China’s AI computing capacity during a critical period in the global race for AI dominance. Anthropic asserts that the extensive scale of extraction carried out by DeepSeek, MiniMax, and Moonshot “requires access to advanced chips,” thus reinforcing the necessity for stringent export controls. As stated in Anthropic’s blog, “Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation.”

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think-tank and co-founder of CrowdStrike, expressed his lack of surprise at these incidents to TechCrunch. He commented, “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of U.S. frontier models. Now we know this for a fact.” Alperovitch argued that this revelation provides even more compelling reasons to refuse the sale of any AI chips to these companies, as such sales would only further advantage them.

Beyond undermining American AI dominance, Anthropic also highlighted potential national security risks. The company noted that U.S. AI developers integrate safeguards to prevent state and non-state actors from leveraging AI for malicious purposes, such as developing bioweapons or conducting cyberattacks. Models created through illicit distillation are unlikely to retain these critical safeguards, potentially leading to the proliferation of dangerous capabilities with stripped-out protections. Anthropic specifically cautioned that authoritarian governments could deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance, risks that are compounded if such models are open-sourced. TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for their comments on these allegations.
