AI's Next Battlefield: Will Future Wars Depend on Algorithms?

Published 5 hours ago · 6 minute read
Zainab Bakare

On February 28, 2026, the United States and Israel launched a coordinated strike campaign against Iran, codenamed Operation Epic Fury, the largest concentration of military power in nearly three decades.

Iran airstrike — Source: Al Jazeera

It was a defining moment for modern warfare, not just for its scale, but for what was reportedly running behind the scenes: an artificial intelligence model helping soldiers identify targets, assess intelligence, and simulate battlefield outcomes in real time.

That AI was Claude, made by a company called Anthropic. The same Claude that, just hours earlier, the U.S. government had officially banned.

That contradiction, banning a tool and using it in the same breath, on the same day, in an active war, is more than an irony.

It is a window into exactly how deeply AI has already embedded itself into the machinery of modern conflict.

It Already Happened

According to reports from Reuters citing unnamed sources, U.S. Central Command used Claude during Operation Epic Fury for intelligence assessment, target identification, and battlefield simulations.

This came just nineteen hours after Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security" and ordered all federal agencies to halt use of its systems. That designation itself followed days of escalating tension over the Pentagon's demand for unrestricted access to the AI and Anthropic's refusal to remove its ethical restrictions.

Let that sink in. A government declared a technology company a national security threat, then used that company's technology to help conduct airstrikes.

For its part, Anthropic stated it supports "all lawful uses of AI for national security," while continuing to challenge the supply-chain designation in court.

The Pentagon, meanwhile, announced it would work with Anthropic for up to six more months during a phase-out period, a timeline that conveniently covers much of mid-2026.

The AI Arsenal

What happened over Iran didn't come out of nowhere. AI integration in U.S. military operations has been building for years.

In July 2025, the Department of Defense awarded contracts worth up to $200 million for AI services with access to tools from Anthropic, OpenAI, Google, and Elon Musk's xAI.

These were usage contracts embedded in classified networks, giving warfighters direct access to frontier AI models.

Israel has been on a parallel track. Systems known as "Habsora" and "Lavender," widely reported on during the fighting in Gaza, process large volumes of data to generate and prioritize target banks at a pace no human analyst could match.

A key distinction is that Israeli systems are largely developed internally by military technology units, while the U.S. has leaned heavily on private commercial firms, a dynamic that creates friction, as the Anthropic episode made plain.

The AI arms race is not just American or Israeli. Iran has integrated AI into its own arsenal, including AI-augmented unmanned vehicles.

During the broader 2025 Israel-Iran exchanges, AI-generated deepfakes and propaganda videos spread disinformation in real time, complicating an already volatile information environment.

What AI Actually Does in a War Zone

When most people hear "AI in warfare," they picture autonomous killer robots. The reality today is more procedural and, in some ways, more unsettling for being so mundane.

AI systems currently process satellite and sensor data faster than any human team can. They flag anomalies, cross-reference intelligence databases, and surface, in seconds, patterns that analysts might miss over days of review.


They run battle simulations, showing scenarios before a mission launches.

They support cyber operations, both offensive and defensive. And increasingly, they are used to generate, spread, or counter disinformation.

The key line that military officials consistently draw is that AI is informing the humans who decide, not deciding itself. That distinction matters legally and ethically.

But it is a thin and narrowing line. When an AI system generates a target list and a human approves it under time pressure, how meaningful is that human judgment, really?

The Ethical Minefield

These are not hypothetical questions. When an AI-assisted strike results in civilian casualties, who is accountable? The algorithm? The company that built it? The soldier who pressed the button? The commander who signed off?

Current international law was not written with autonomous or semi-autonomous targeting in mind.

Anthropic's resistance suggests a deeper tension. AI companies building safety constraints into their models are, in effect, making ethical choices about how their technology can be used in war, choices that governments argue belong to them alone.

Defense Secretary Hegseth called Anthropic's refusal "arrogance and betrayal." Anthropic argued its restrictions are not optional features but fundamental safety commitments.

Both positions reflect genuine, reasonable concerns. Neither fully resolves the question.

Meanwhile, the speed at which AI systems operate outpaces the ability of any chain of command to verify outputs in real time. It is in that gap that mistakes and atrocities are made.

An Arms Race Nobody Voted For

Every major power is now racing to field AI-enabled military capabilities faster than its adversaries.

Smaller states and non-state actors are not far behind. AI lowers the barrier to entry for sophisticated operations; tools once available only to well-resourced militaries are increasingly accessible across the board.

There is no global treaty governing AI in warfare.

There is no agreed-upon red line for autonomous weapons. There is no international body with the mandate or the authority to enforce one.

The technology is moving faster than the diplomacy and the ethics, and as February 28 demonstrated, faster than governments' own rules about it.

A Question Worth Sitting With

The story of AI in the Iran strikes isn't really about one AI company, or one executive order, or one operation. It is about the moment humanity crossed into a new era of warfare and kept going, barely pausing to look back.

AI in war is not coming. It is here.

The question is not whether algorithms will shape future conflicts; they already are.

The question is whether the institutions meant to govern armed conflict can move fast enough to keep pace with the technology reshaping it.

If a government cannot enforce its own AI rules for a single day, in peacetime, what happens when the next war starts, and the next, and the one after that?


That question doesn't have a comfortable answer. But it is the one we need to be asking.

