Learning AI #11

How should AI be used in war?

Should AI control warfare decisions? 

Incorporating AI into battle plans is not a simple proposition. There are three basic models for how to use AI in warfare decision-making:

  1. Human-in-the-loop. AI advises, humans decide

  2. Human-on-the-loop. AI acts, humans supervise and can override

  3. Human-out-of-the-loop. AI makes decisions autonomously

The current state of the art is somewhere between humans being in the loop and on the loop. Things get much trickier when we take humans out of the loop and hand life-or-death decisions over to a machine acting autonomously.

AI can suggest targets and process data faster than humans, but keeping a human finger on the trigger, rather than automating it, leaves room for considerations like civilian harm and political consequences.
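To make the three models concrete, here is a minimal, purely illustrative sketch in Python. It is my own toy example, not any real military or vendor system, and names like recommend_action and human_approves are hypothetical stand-ins for whatever model and operator interface a real system would use.

    # Toy sketch of the three control models. Every name here is a
    # hypothetical stand-in, not a real system or API.

    def recommend_action(sensor_data):
        # Placeholder for an AI model that proposes an action from sensor input.
        return {"action": "hold_fire", "confidence": 0.42}

    def human_approves(proposal):
        # Placeholder for a human operator reviewing the proposal.
        answer = input(f"Approve '{proposal['action']}'? [y/N] ")
        return answer.strip().lower() == "y"

    def human_in_the_loop(sensor_data):
        # Model 1: the AI only advises; nothing happens without human approval.
        proposal = recommend_action(sensor_data)
        return proposal["action"] if human_approves(proposal) else "no_action"

    def human_on_the_loop(sensor_data):
        # Model 2: the AI acts by default; a supervising human can override,
        # but only if they notice and intervene in time (not modeled here).
        proposal = recommend_action(sensor_data)
        return proposal["action"]

    def human_out_of_the_loop(sensor_data):
        # Model 3: the AI decides and acts with no human checkpoint at all.
        return recommend_action(sensor_data)["action"]

The only thing that changes across the three functions is where, or whether, a human checkpoint sits between the AI's recommendation and the action.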

Here is an excerpt from my book that brings the key issues into focus:

In 1983, at the peak of the Cold War, Soviet Colonel Stanislav Petrov was the officer on duty when the Soviets' early-warning system indicated a launch of missiles from the U.S. targeted at the Soviet Union. The system told him five missiles were inbound. The missiles, each with multiple nuclear warheads, would strike and detonate in the Soviet Union in less than 20 minutes.

The way the Soviet system worked at the time, if Petrov followed protocol and immediately reported the incoming missiles up the chain to the Soviet command, the automatic response would be a full-scale nuclear retaliatory strike on the U.S. The end of the world. Not cool.

Petrov thought, “Why just five missiles and not an all-out strike from the U.S. on the Soviet Union?” Five missiles would be akin to the U.S. “poking the bear,” and the Soviets would respond with overwhelming force. Strategically, the five-missile attack made no sense to Petrov.

Warfare expert Jeffrey Lewis, in an NPR interview:

"[Petrov] just had this feeling in his gut that it wasn't right. It was five missiles. It didn't seem like enough. So even though by all of the protocols he had been trained to follow, he should absolutely have reported that up the chain of command and, you know, we should be talking about the great nuclear war of 1983 if any of us survived."

As it turned out, the early-warning system had interpreted sunlight reflecting off clouds as incoming missiles. Once the false alarm was verified, there was no need for Petrov to pull the trigger.

 

Autonomous AI would not have hesitated to launch the retaliatory strike against the U.S. In effect, Petrov was the human-on-the-loop of the Soviet decision-making machinery.

AI would not have grasped the context (the end of the world); it would simply have worked through its algorithm, much as Petrov did, except that AI has neither compassion nor the ability to make ethical judgments.

AI can augment wartime capabilities, but we are not ready for AI to act autonomously

The Department of War (DOW) recently dismissed Anthropic (maker of Claude) because DOW wanted Claude to be able to run fully autonomously when making some wartime decisions.

Anthropic said “no” since it realized a human-out-of-the-loop scenario was untested and extremely dangerous.

 In a fully autonomous situation, who is accountable when the AI, acting on its own, directs a missile strike on a school it thought was an enemy compound? Do we blame the software developer? DOW? You can see the problems.

An algorithm cannot be court-martialed, tried for war crimes, or held accountable in any meaningful sense.

The new arms race with China, Russia, and others is not about building the first atomic bomb or the cruise missile with the longest range. The race is to harness the power of AI in the use of lethal force.

Whichever country can best embed AI to amplify its military brain power will likely be the last man standing in a worldwide conflict.

 

Things I think about

Nintendo was founded in 1889, originally as a playing card company.

Response from ChatGPT

Should AI control warfare decisions? 

Short answer: no—AI should not have full control over warfare decisions. But it will and should play a role as a support tool. The real question is how much control we allow it to have.

The Core Issue: Control vs. Assistance

There’s a spectrum:

  • Human-in-the-loop → AI advises, humans decide

  • Human-on-the-loop → AI acts, humans supervise and can override

  • Human-out-of-the-loop → AI makes decisions autonomously

Most serious policymakers and military ethicists agree the last category is dangerous.

Why Full AI Control Is a Bad Idea

1. Accountability breaks down

If an AI system makes a lethal mistake:

  • Who is responsible? The programmer? The commander? The machine?

  • Legal frameworks (like the United Nations laws of armed conflict) assume human judgment

Without accountability, deterrence and justice weaken.

2. AI lacks moral judgment

AI can:

  • Optimize targets

  • Predict outcomes

  • Process data faster than humans

But it cannot truly understand:

  • Civilian harm vs. military necessity

  • Ethical nuance

  • Long-term political consequences

War is not just tactical—it’s moral and political.

3. Risk of catastrophic errors

AI systems can:

  • Misidentify targets

  • Be fooled by adversarial inputs

  • Act unpredictably in novel situations

In warfare, even a 1% error rate is unacceptable when lives are at stake.

4. Escalation risk (machines move too fast)

Autonomous systems could:

  • Respond instantly to perceived threats

  • Trigger retaliatory strikes before humans intervene

This creates a “flash war” problem—conflicts escalate faster than diplomacy can react.

5. Hacking and manipulation

Any AI system:

  • Can be hacked

  • Can be spoofed with false data

If an adversary manipulates inputs, they could trick AI into making lethal mistakes.

Where AI Should Be Used

AI is extremely valuable in non-decision roles:

  • Intelligence analysis (satellite imagery, signals)

  • Logistics and supply chain optimization

  • Defensive systems (e.g., missile interception with human oversight)

  • Scenario simulation and war-gaming

Think of AI as:

A very fast analyst—not a commander

The Middle Ground: “Human-in-the-Loop”

The most widely supported model:

  • AI recommends actions

  • Humans must approve lethal force

  • Humans can override at any time

This preserves:

  • Accountability

  • Ethical judgment

  • Strategic control

Real-World Direction

Organizations like the United Nations and various defense departments are actively debating limits on:

  • Autonomous weapons

  • “Lethal Autonomous Weapon Systems” (LAWS)

There is no global consensus yet, which is part of the risk.

Bottom Line

AI should augment, not replace, human decision-making in war.

If you hand full control to AI, you’re effectively:

  • Removing moral responsibility

  • Increasing the chance of accidental escalation

  • Creating systems you may not fully control

If you want, I can go one level deeper—e.g., how the U.S., China, and others are actually deploying AI in military systems today (it’s more advanced than most people think).

Response from Claude

Should AI control warfare decisions? 

No. And I'll give you both the short answer and the serious one.

The short answer

Warfare decisions involve moral accountability, contextual judgment, and political legitimacy that cannot be delegated to machines. The decision to take human life — individually or at scale — requires a human being who can be held responsible for that decision. An algorithm cannot be court-martialed, tried for war crimes, or held accountable in any meaningful sense.

The longer argument

The question has several layers worth separating:

AI as a tool in warfare — already happening and largely unavoidable. Targeting assistance, logistics optimization, threat detection, cyber operations. This is a different question from control.

AI making autonomous lethal decisions — this is where serious objections begin. The core problems:

The accountability vacuum. International humanitarian law — the laws of armed conflict — requires that someone be responsible for targeting decisions. If an autonomous weapon kills civilians, who is criminally liable? The programmer? The commanding officer? The manufacturer? Nobody has a good answer, which means the accountability that deters atrocities disappears.

The context problem. Warfare is saturated with contextual moral judgment that current AI cannot perform. Is that person a combatant or a civilian? Is that building a military target or a hospital? These require interpretation of intent, situation, and proportionality that machine vision and pattern recognition cannot reliably execute.

The escalation problem. Human decision-makers have fear, hesitation, and political consequence working as natural brakes on escalation. AI systems have none of these. Two autonomous weapons systems engaging each other could escalate to catastrophic levels in milliseconds with no human able to intervene.

The adversarial manipulation problem. AI systems can be spoofed, hacked, and deceived in ways human commanders cannot. An enemy who understands your autonomous weapons system's decision logic can potentially trigger it deliberately.

The strongest counterargument

Proponents argue that AI could actually be more ethical than humans — it doesn't get angry, tired, or vengeful. It won't commit battlefield atrocities out of rage or fear. In theory, a well-designed system might make more proportionate targeting decisions than a traumatized soldier.

This argument has surface appeal but fails on the accountability and legitimacy grounds. Even if the AI makes technically "better" decisions by some metric, the absence of human moral agency in lethal decisions is itself a violation of something fundamental about how we organize political violence and its consequences.

Where this is actually heading

The realistic near-term trajectory is not full AI control but human-machine teaming with progressively shorter decision loops — AI narrows options, humans authorize. The danger is that time pressure gradually makes human authorization nominal rather than real. That's the slow slide worth watching.

The international community has been trying since 2014 to negotiate a treaty on Lethal Autonomous Weapons Systems at the UN Convention on Certain Conventional Weapons. Progress has been minimal, largely because the major military powers — US, China, Russia — don't want to constrain themselves.

The bottom line

AI should inform warfare decisions. It should never control them. The line between those two things is eroding faster than the policy frameworks designed to hold it.

Sonnet 4.6

Extended

**********