The use of drones to kill suspected terrorists is controversial, but so long as a human being decides whether to fire, it is not a radical shift in how humanity wages war. Since the first archer fired the first arrow, warriors have been inventing ways to strike their enemies while removing themselves from harm's way.
Soon, however, military robots will be able to pick out human targets on the battlefield and decide on their own whether to go for the kill. A 2011 Defense Department roadmap for ground-based weapons states: "There is an ongoing push to increase autonomy, with a current goal of 'supervised autonomy,' but with an ultimate goal of full autonomy."
Science is catching up to fiction. And one doesn't have to believe the movie version of autonomous robots becoming sentient to be troubled.
The decisions ethical soldiers must make are extraordinarily complex and human. Could a machine soldier distinguish as well as a human can between combatants and civilians, especially in societies where combatants don't wear uniforms and civilians are often armed? Would we trust machines to determine the value of a human life, as soldiers must do when deciding whether firing on a lawful target is worth the loss of civilians nearby? Could a machine recognize surrender? Could it show mercy? And if a machine breaks the law, who will be held accountable?
Some argue that these concerns can be addressed if we program war-fighting robots to apply the Geneva Conventions, and note that machines would never act out of panic or anger or a desire for self-preservation. But most experts doubt that advances in artificial intelligence will ever give robots an artificial conscience, and even if that were possible, machines that can kill autonomously would almost certainly be ready before the breakthroughs needed to "humanize" them. And unscrupulous governments could opt to turn the ethical switch off.
Tyrants cannot always count on human armies to do their bidding. But imagine Syria's Bashar Assad commanding autonomous drones programmed to kill protest leaders or to fire automatically on any group of more than five people. He would have a weapon no dictator in history has had: an army that will never refuse an order, no matter how immoral.
Nations have banned classes of weapons before — chemical and biological weapons, cluster munitions, landmines, blinding lasers. It should be possible to forge a treaty banning offensive weapons capable of killing without human intervention, especially if the United States, which is likely to develop them first, takes the initiative. A choice must be made before the technology proliferates.
Tom Malinowski is Washington, D.C., director at Human Rights Watch.