The United Nations on Thursday was dealing with a surprisingly pressing issue: killer robots.
In Geneva, the U.N.'s special rapporteur on extrajudicial executions, Christof Heyns, called for a moratorium on the development of drones that are programmed to target and fire without human intervention. “War without reflection is mechanical slaughter,” he said. “In the same way that the taking of any human life deserves at the minimum some deliberation, a decision to allow machines to be deployed deserves a collective pause, in other words, a moratorium.”
Such technology is not as far off as you might think. Missile defense systems like Israel’s Iron Dome and the United States’ Phalanx automatically detect, track, and shoot down incoming missiles. The U.S. Navy’s X-47B drone, launched successfully from an aircraft carrier last month, can execute its flight plan autonomously, with a human overriding it only if something goes wrong. In South Korea, the SGR-1 sentry robot targets people who enter the demilitarized zone and shoots them if a soldier back at base gives the command. Heyns is worried that soon the soldier could be taken out of the loop.
This raises some obvious questions, first among them: Why would anyone want to give robots such deadly power? Aren’t there thousands of movies where this goes horribly wrong?
In his report, Heyns notes that the pace of warfare has increased to the point that “humans have in some respects become the weakest link in the military arsenal.” We already let computers target and shoot incoming missiles because no human has the reflexes to do it. In aerial combat, fighter jets could potentially maneuver more quickly if it weren’t for the fragile organism in the cockpit that goes unconscious at high G-forces. Even drones are limited by their human operators: the time it takes for commands to travel from the operator to the satellite to the drone makes striking fast-moving targets difficult, and the communication link itself is vulnerable to hacking and jamming. All of these pressures push militaries toward designing weapons with greater autonomy, even if no one sets out with the explicit intention of creating a killer robot.
The moratorium is an attempt to pause the slide toward autonomous weapons so we can think carefully about the consequences—because they’re potentially deadly.
“Autonomous systems interacting with each other competitively can escalate quickly and unpredictably,” says Dr. Peter Asaro, a philosopher of science at the New School and co-founder of the International Committee on Robot Arms Control. Asaro spoke with Steve Goose of Human Rights Watch before the hearing in Geneva, telling representatives about their Campaign to Stop Killer Robots, a push to impose a land mine–style ban on the technology. Asaro points to the 2010 Flash Crash, when automated high-frequency trading algorithms caused the Dow to plummet 1,000 points in five minutes. Flash crashes would be far more dangerous with missiles, Asaro says.
Then there’s the tangle of humanitarian and legal problems autonomous weapons would create. Some people, like the roboticist Ronald Arkin, argue that killer robots would actually be more humane than human soldiers, because they’d never fire out of fear for their own safety and would never act out of vengeance or spite. True, robots don’t rape and torture, but they also don’t act out of compassion or grace, Heyns pointed out, nor do they operate well in situations that require a lot of context and social cues to understand. Could a robot distinguish between a soldier and a civilian, or between a combatant who is raising his rifle to fire and one who is raising it above his head to surrender? International humanitarian law requires that it do so. It also requires that harm to civilians be weighed against military advantage before an attack is launched; it’s hard to imagine a computer program sophisticated enough to make such complex value judgments.
But even if one could be programmed, sometime in the distant future, to comply with humanitarian law and military codes of conduct, what happens if the robot malfunctions and destroys a town? Who’s to blame? The programmer who designed it? The military commander who ordered the operation? The subordinates who were monitoring the robot’s actions, even if the robot was making decisions too quickly for them to intervene? The correct answer, Heyns, Asaro, and others worry, is no one. No one gave the order to fire, no one pulled the trigger, so no one is responsible. There’s no one to reprimand or bring before a court. There’s a “responsibility vacuum,” to quote the report.
That’s not just a legal concern; it’s a moral one. Every military innovation from longbows to drones has been accused of further detaching soldiers from the people they kill, thus making violence more palatable and war more likely, but autonomous weapons threaten to make that detachment complete, delegating decisions about whether to use a weapon to the weapon itself. Someone along the chain of command should have to think about the decision to kill and take responsibility for it; outsourcing that decision to a machine, Heyns says, “dehumanizes armed conflict even further.” It could also result in more conflict. If no one is responsible for an autonomous weapon’s actions, even in the attenuated form of feeling a pang of conscience when pressing a button, then the bar for using them could become dangerously low—they could be used with impunity, in Heyns’s words.
Heyns wants the U.N. to convene a panel to study robotic weapons and work out an international framework to regulate them. In the meantime, he recommends that states implement their own moratoria on the technology. For understandable reasons, Pakistan was the most vocal nation in calling not just for a moratorium but for an outright ban. The European Union was more cautious, calling the report “interesting” and signaling that it would be open to some sort of U.N. panel on the matter. The U.S. echoed the EU’s statement, saying that it was important to “proceed in a lawful, prudent, and responsible manner,” but stopping short of calling for a moratorium.
The U.S. is actually ahead of most other countries when it comes to autonomous weapons policy. Last year the Department of Defense imposed a sort of moratorium of its own, directing that for the next 10 years the U.S. may develop fully autonomous weapons only if they use nonlethal force. Though the directive can be overridden by high-level officials, Human Rights Watch says it’s a step in the right direction.
Human Rights Watch and its allies in the campaign would like to go further: an international ban, along the lines of the land-mine treaty, prohibiting the development and use of autonomous weapons. (Jody Williams, who won the Nobel Peace Prize for her work banning mines, is also backing the campaign.) Asaro wants one fundamental principle established: any decision to kill must be made by a human. At the very least, the campaigners want an international moratorium, and soon. Otherwise, as Heyns says, the current drift of research will continue, and “the matter will, quite literally, be taken out of human hands.”