Military AI Under Scrutiny: Hegseth and Anthropic CEO to Tackle Ethical Quandary

Published 10 hours ago · 4 minute read
Uche Emeka

Defense Secretary Pete Hegseth is scheduled to meet with Dario Amodei, the CEO of artificial intelligence company Anthropic, on Tuesday amidst a growing debate over the ethical use of AI in national security. Anthropic, known for its chatbot Claude, stands out among its peers for not supplying its technology to a new U.S. military internal network. The meeting, confirmed by a defense official, highlights the critical discussions surrounding AI’s role in high-stakes military applications, including lethal force and surveillance, and the potential for misuse of sensitive information.

Amodei has consistently articulated his ethical reservations regarding the unchecked deployment of AI by governments. His concerns encompass the dangers posed by fully autonomous armed drones and the potential for AI-assisted mass surveillance to track and suppress dissent. In a recent essay, Amodei warned that a "powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow."

This dialogue occurs as Secretary Hegseth actively seeks to eliminate what he terms "woke culture" within the armed forces. Hegseth's vision for military AI systems emphasizes operations "without ideological constraints that limit lawful military applications," asserting that the Pentagon’s "AI will not be woke." In January, Hegseth announced that Elon Musk’s AI chatbot Grok, which recently faced scrutiny for generating highly sexualized deepfake images, would join the Pentagon network known as GenAI.mil. OpenAI also committed to joining the military’s secure AI platform, offering a custom version of ChatGPT for unclassified tasks.

The Pentagon had previously awarded defense contracts worth up to $200 million each to four AI companies: Anthropic, Google, OpenAI, and xAI. Notably, Anthropic was the first AI company to gain approval for classified military networks, where it collaborates with entities such as Palantir. However, the other three companies currently operate solely in unclassified environments. By early this year, Hegseth publicly highlighted only xAI and Google, stating his rejection of AI models "that won’t allow you to fight wars."

Anthropic has consistently positioned itself as the more responsible and safety-conscious leader among AI companies, a stance rooted in its founders' departure from OpenAI in 2021 to form the startup. This commitment to safety is now being tested by the company's interactions with the Pentagon. According to Owen Daniels, an associate director at Georgetown University’s Center for Security and Emerging Technology, Anthropic’s peers—including Meta, Google, and xAI—have demonstrated a willingness to comply with the Department of Defense’s policy on utilizing models for all lawful applications. Daniels suggests that Anthropic’s bargaining power may be limited, and the company risks diminishing its influence in the department's broader AI adoption strategy.

In the aftermath of ChatGPT's release, Anthropic aligned closely with President Joe Biden’s administration, voluntarily subjecting its AI systems to third-party scrutiny to mitigate national security risks. Amodei has cautioned against AI’s potentially catastrophic dangers, forecasting that "we are considerably closer to real danger in 2026 than we were in 2023," while advocating for a "realistic, pragmatic manner" in managing these risks. He, however, rejects the label of an AI "doomer."

This is not the first instance where Anthropic's advocacy for stringent AI safeguards has led to conflict. The company previously criticized the Trump administration's proposals to relax export controls on certain AI computer chips to China, though it maintains a close partnership with chipmaker Nvidia. Furthermore, Anthropic and the Trump administration have been on opposing sides concerning lobbying efforts to regulate AI at the state level. David Sacks, Trump’s top AI adviser, accused Anthropic in October of employing a "sophisticated regulatory capture strategy based on fear-mongering" in response to an Anthropic co-founder's remarks on balancing technological optimism with concerns about advancing AI capabilities.

Even as it hired former Biden officials after Trump's return to the White House, Anthropic has sought to project a bipartisan image, exemplified by the addition of Chris Liddell, a former White House official from Trump's first term, to its board of directors.

The current contention between the Pentagon and Anthropic echoes past controversies, such as the uproar surrounding Project Maven, a Pentagon drone surveillance program, which saw some tech workers resign and Google withdraw its participation. Owen Daniels notes that despite such objections, the Pentagon’s reliance on drone surveillance has only escalated. He concludes that "the use of AI in military contexts is already a reality and it is not going away," acknowledging that while some applications are lower stakes, "battlefield deployments of AI entail different, higher-stakes risks," particularly involving lethal force or nuclear arms. Military users, he observes, have been aware of these risks and engaged in mitigation planning for nearly a decade.
