OpenAI Faces Backlash Over Sam Altman's Defense Department Deal

OpenAI CEO Sam Altman said his company has reached a deal with the United States Department of Defense after the Pentagon's previous contractor, Anthropic, voiced ethical concerns about the military's use of its artificial intelligence (AI) technology.
The deal allows OpenAI’s advanced models to be used in highly classified military networks amid rising tensions in the Middle East.
The announcement on X sparked strong backlash, including “Cancel ChatGPT” hashtags, concerns from security experts, and accusations that the move goes against OpenAI’s mission to ensure AGI benefits everyone.
President Trump ordered the U.S. government to stop using the artificial intelligence company Anthropic's products and the Pentagon moved to designate the company a national security risk on Friday, in a sharp escalation of a high-stakes fight over the military's use of AI.
Hours after the president's announcement, rival company OpenAI said it had struck a deal with the Defense Department to provide its own AI technology for classified networks.
Secretary of War Pete Hegseth defended the decision by calling Anthropic a “supply-chain risk to national security,” a label usually reserved for foreign adversaries.
The Guardian reports that Anthropic refused the Pentagon’s demand to remove safeguards from its Claude AI model, even after being threatened with the loss of a $200 million contract and the national security risk designation.
OpenAI quickly stepped in after Anthropic withdrew.
Although the administration wanted AI models available for “all lawful purposes,” Sam Altman said OpenAI’s decision was not surrender but a balanced compromise.
He explained during an AMA that the company secured comparable safety protections, consistent with laws such as the Fourth Amendment and the Posse Comitatus Act, while avoiding a direct confrontation that could have cut off the military’s access to advanced AI during a conflict.
His comments sparked debate about the ethics of using AI in warfare.
When questioned about shifting from “human betterment” to defense work, Altman responded that national security requires strong tools.
He also emphasized that safeguards are in place to prevent the AI from operating as an autonomous weapon.
The measures include a cloud-only deployment strategy, meaning the AI models will not be built into weapons or edge devices, and the use of Field Deployment Engineers to supervise classified operations.
Despite this, critics remained skeptical, pointing to concerns that laws like the USA PATRIOT Act could allow broad data collection.
When asked about AI causing a global catastrophe, Altman said national security partnerships might reduce risks.
He also admitted he has considered the possibility of the federal government nationalizing OpenAI, though he believes it is unlikely.
However, some critics still fear growing government control over advanced AI development.
The implications of this deal extend far beyond a single contract.
By accepting the "supply chain risk" designation previously applied to its competitor, OpenAI has implicitly validated a framework where the government can favor or penalize companies based on their ideological commitment to military utility.
This establishes a concerning precedent, as Altman himself acknowledged, potentially pressuring private companies to compromise their ethical guardrails to avoid being branded a national security threat.
The most debated issue is the “human-in-the-loop” rule.
While OpenAI says humans will make the final decision on the use of force, experts argue that current Defense Department policy is unclear about what meaningful human control actually means in fast-moving, AI-driven combat.
If AI can process targets faster than humans can react, critics question whether humans are truly in control or just approving machine decisions.
Concerns about losing control of advanced AI, once theoretical, are now seen as real risks in military settings.
By the end of Altman’s AMA, it was clear that AI is no longer viewed as neutral technology.
OpenAI’s partnership with the Department of War signals a shift, treating advanced AI as a strategic state asset rather than a tool for global public benefit.