Pentagon Labels AI Innovator Anthropic an 'Immediate Supply Chain Risk'

Published 12 hours ago · 2 minute read
Uche Emeka

The Trump administration has taken the unusual step of designating artificial intelligence company Anthropic as a supply chain risk, a move that could force government contractors to discontinue using the firm’s Claude AI chatbot.

The Pentagon formally notified the company that the designation would take effect immediately, effectively ending negotiations between the government and the San Francisco–based AI developer.

The decision followed accusations from Donald Trump and Defense Secretary Pete Hegseth that the company’s restrictions on how its technology could be used posed a potential threat to national security.

The standoff intensified after Anthropic CEO Dario Amodei refused to remove safeguards designed to prevent the system from being used for mass domestic surveillance or fully autonomous weapons programs.

In its statement, the Pentagon argued that the military must retain the authority to use technology for all lawful purposes and warned that it would not allow private vendors to limit how critical tools are deployed in defense operations.

Officials said allowing such restrictions could interfere with military decision-making and potentially endanger warfighters.

Under U.S. procurement rules, labeling a company a supply chain risk can restrict the use of its products in military contracts and compel defense contractors to seek alternative providers.

Image credit: Reuters

The designation is particularly significant because the rule is typically used against foreign adversaries suspected of tampering with technology supply chains, not against domestic firms.

Anthropic has rejected the designation and announced plans to challenge it in court, arguing that the government’s action is legally unsound.

Amodei clarified that the company’s proposed safeguards applied only to high-level use cases, specifically prohibiting mass surveillance of Americans and fully autonomous weapons systems, while not interfering with operational military decisions.

He also noted that the designation appears to apply only to the use of Claude within direct Pentagon contracts, meaning most commercial and civilian customers remain unaffected.

The dispute has sparked wider debate across the technology sector, with critics warning that the move could discourage other AI firms from working with the U.S. government while deepening tensions between Silicon Valley and the defense establishment.
