Algorithmic, Fully Autonomous Killing is a Trap for the Power Using It

Setup

A recent report says the Pentagon is moving closer to approving autonomous drones that kill human targets, a decision that would likely backfire in ways that do the United States more harm than good. (This article does not address close-range, fully autonomous weapons systems, such as loitering munitions, that are employed by combat troops on fixed battlefronts set apart from civilian populations.)

Reasons for Skeptical Opposition to Fully-Autonomous AI-Controlled Lethal Weapons

Fully autonomous, algorithmic killing runs on programming that lacks the fullness of real-time, recent, and local human experience around the target. Such programming can err in targeting decisions because it lacks human-experiential context, real-time awareness of the war’s wider strategic timeline, local situational context, and the ability to compare the present situation with past ones.

Fully autonomous AI weapons that lack integrated human senses, memory, perception, emotion, logic, multi-factor experience, detail recall, pattern-spotting, and warrior ethos will heighten blind-spot risk, including the risk of killing innocent civilians.

Algorithmic killing may also lack adaptive perception of enemy combat deception and strategic cost-benefit appreciation in the timing of “the kill,” or, if the intelligence behind the strike is wrong in its details, “the manslaughter.” And it is in the changing details that the devil dwells.

Because the program lacks human capabilities of real-time adaptation (capabilities humans do not even fully understand in themselves), the machine grows blinder to developing, aleatory uncertainty. Rapidly changing contexts outside the machine’s awareness thus become fraught with risk of negative consequences for security, defense, and clearly defined victory.

The autonomous machine may also be unaware that the enemy is studying its attack record. Such machine blindness increases the chances of successful, adaptive enemy ruses that use civilians as decoys or shields, risking strategically disastrous manslaughter incidents that spark enmity against the United States.

Non-lethal or Life-Protecting Autonomy Not as Problematic

The quest to have machines autonomously do non-lethal or even lifesaving things for human beings in other areas of civilized life is not so problematic. If a machine that lacks a civilian’s context-awareness autonomously warns that civilian not to do something potentially dangerous, its errors are far less likely to result in loss of innocent life.

In authorizing autonomous, robotic, lethal action, however, error can cost far more on multiple levels beyond the loss of one wrongly targeted person. What does that do to us? It galvanizes internal discord, setting honorable U.S. combat veterans against policymakers willing to risk unnecessary evils, while galvanizing the world against the U.S.

Non-lethal Autonomous Engagement

Likewise, non-lethal autonomous engagement of a possible target may provoke behaviors that provide the greater targeting certainty originally sought, so that a secondary lethal response could be triggered. Yet even this implies human evaluation of the initial autonomous engagement.

For example, a drone that confronts and fires blanks or rubber rounds to draw a response from local hostile forces could yield real-time intelligence about the location of intended targets, after which follow-on autonomous systems could use lethal or non-lethal means to subdue or neutralize the combat capability of the flushed targets. Yet even this risks drawing in non-combatants who fire at a robot in self-defense without any intent to kill U.S. troops; they may be pulled into a combat situation when a hardened enemy deputizes, coerces, and/or sets them up for it.

The fluid complexities of precisely targeting human individuals or groups will be difficult for programmers, or for programming in fully autonomous systems, to anticipate. With ‘humans-in-the-loop,’ by contrast, human and algorithmic intelligence can be combined for greater certainty of intended strategic, victory-relevant results.
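As a rough illustration of the ‘humans-in-the-loop’ point, the sketch below combines an algorithmic confidence score with a required human confirmation before any lethal action is authorized. The names, thresholds, and structure are illustrative assumptions for this article, not a description of any fielded system.

```python
from dataclasses import dataclass

# Hypothetical illustration only: an engagement is authorized solely when
# the algorithmic confidence clears a threshold AND a human operator,
# who can weigh local context the model cannot see, explicitly confirms.

@dataclass
class TargetAssessment:
    algorithmic_confidence: float  # model's belief the target is a lawful combatant (0-1)
    human_confirmed: bool          # explicit operator confirmation after review

def authorize_engagement(assessment: TargetAssessment,
                         confidence_threshold: float = 0.95) -> bool:
    """Return True only if the algorithm and a human operator both concur."""
    if assessment.algorithmic_confidence < confidence_threshold:
        return False  # the algorithm alone is not certain enough
    if not assessment.human_confirmed:
        return False  # no lethal action without a human in the loop
    return True

# High machine confidence still yields no authorization without the human.
print(authorize_engagement(TargetAssessment(0.97, human_confirmed=False)))  # False
print(authorize_engagement(TargetAssessment(0.97, human_confirmed=True)))   # True
```

The point of the sketch is structural: the human confirmation is a hard gate, not a weight the algorithm can outvote.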

Some say machine learning may correct for negative incidents in the future. However, once the damage is done and the world believes the U.S. is implementing a “SkyNet” horror, dictators and terrorists will join hands with autocratic leaders around the world and misidentify the United States as the unipolar menace to take down. That would be a terrorist recruiting tool the world does not need. Unintended consequences of weapons autonomy could trigger tipping points against the entire purpose, meaning, and leadership role of the United States in its foreign policy and defense.

Upshot

In some exceptional cases, sending an autonomous weapons system out to kill an enemy or terrorist target could tip the tactical balance in a battle with central strategic importance for shortening the war.

Probabilities of risk and reward would become essential in such cases, and calculating them in the field with AI and human experience together would be optimal. This could matter most where great-power enemies have managed to shut down communications between deployed units and command and control, or when there is communications silence.
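A minimal sketch of what such an in-field risk-reward calculation might look like, assuming hypothetical inputs: a machine-estimated probability of correct identification, a human operator’s adjustment based on local context, and rough weights for the strategic reward of a correct strike and the cost of a wrongful one. All names and numbers are illustrative assumptions, not doctrine.

```python
def expected_strike_value(p_machine: float,
                          human_adjustment: float,
                          reward_if_correct: float,
                          cost_if_wrong: float) -> float:
    """Expected value of a strike under a simple two-outcome model.

    p_machine        : algorithm's estimate that the target is correctly identified (0-1)
    human_adjustment : operator's correction from local, real-time context (e.g. -0.2 to +0.2)
    reward_if_correct: strategic value of a correct strike (arbitrary units)
    cost_if_wrong    : strategic cost of a wrongful strike (arbitrary units, positive)
    """
    p = min(1.0, max(0.0, p_machine + human_adjustment))  # fused probability, clamped to [0, 1]
    return p * reward_if_correct - (1.0 - p) * cost_if_wrong

# Example: the machine is fairly confident, but the operator, seeing civilians nearby,
# revises the probability downward, and the expected value turns negative -- hold fire.
print(expected_strike_value(p_machine=0.85, human_adjustment=-0.15,
                            reward_if_correct=10.0, cost_if_wrong=100.0))  # -23.0
```

Because the strategic cost of a wrongful strike dwarfs the value of a correct one, even a modest human downgrade of the machine’s confidence flips the decision, which is the article’s argument in miniature.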

However, frequent use of fully autonomous weapons could seduce its human users into the thinking of the habitual impaired driver, whose perception of his own impairment is dulled and reinforced by past impaired trips without consequences. The user knowingly sets out anyway, putting other drivers, himself, and everyone connected to them at elevated risk of catastrophic outcomes. So too here: habitual trust in humanly limited algorithms, compounded by self-coaxed impairment of human judgment.

Fully autonomous, non-lethal neutralization of enemy combat power or intent will, in the main, reduce the risks of fully autonomous machine missions, with the above caveats and others outside the scope of this article considered. Non-lethal applications will help where enemies choose civilian environments to shield combatants against more scrupulous, intelligent opponents.

Joint human-artificial intelligence would more often provide the operational adaptability, facility, and comprehension to keep human-robot interactions fresh, accurate, and adaptive, so that it is harder for enemies to use, abuse, or hack our capabilities for their own purposes, whether digitally or by adapting over time to how a weapons system is used.