On the horizon of military technology: Drones that “decide” on their own whom to kill
One afternoon last fall at Fort Benning, Ga., two model-size planes took off, climbed to 800 and 1,000 feet, and began criss-crossing the military base in search of an orange, green and blue tarp.
The automated, unpiloted planes worked on their own, with no human guidance, no hand on any control.
After 20 minutes, one of the aircraft, carrying a computer that processed images from an onboard camera, zeroed in on the tarp and contacted the second plane, which flew nearby and used its own sensors to examine the colorful object. Then one of the aircraft signaled to an unmanned car on the ground so it could take a final, close-up look.
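To picture the handoff the demo describes, here is a hypothetical sketch of that detect / confirm / close-look sequence. Every name and all of the stubbed sensor logic are invented for illustration; the article says nothing about Georgia Tech’s actual software beyond the sequence itself:

```python
# Hypothetical sketch of the detect / confirm / close-look handoff
# described in the Fort Benning demo. All names and the stubbed
# sensor logic are illustrative assumptions, not the real system.

from dataclasses import dataclass
import random


@dataclass
class Detection:
    lat: float
    lon: float
    confidence: float  # classifier score from the onboard camera


def scan_for_target() -> Detection | None:
    """First aircraft: process camera images, report a candidate."""
    # Stub: pretend the image pipeline spotted the tarp.
    return Detection(lat=32.35, lon=-84.97, confidence=0.8)


def confirm_with_own_sensors(det: Detection) -> bool:
    """Second aircraft: fly nearby and re-examine independently."""
    # Stub: an independent look either supports or rejects the call.
    return det.confidence > 0.5 and random.random() < 0.95


def final_close_look(det: Detection) -> bool:
    """Unmanned ground vehicle: drive over for the close-up check."""
    return True  # stub: visual confirmation at close range


if __name__ == "__main__":
    candidate = scan_for_target()
    if candidate and confirm_with_own_sensors(candidate):
        confirmed = final_close_look(candidate)
        print("target confirmed" if confirmed else "target rejected")
```

The point of the architecture, as the article tells it, is cross-checking: no single sensor’s guess is acted on until a second, independent platform has examined the same spot.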
This successful exercise in autonomous robotics could presage the future of the American way of war: a day when drones hunt, identify and kill the enemy based on calculations made by software, not decisions made by humans. Imagine aerial “Terminators,” minus beefcake and time travel.
The Fort Benning tarp “is a rather simple target, but think of it as a surrogate,” said Charles E. Pippin, a scientist at the Georgia Tech Research Institute, which developed the software to run the demonstration. “You can imagine real-time scenarios where you have 10 of these things up in the air and something is happening on the ground and you don’t have time for a human to say, ‘I need you to do these tasks.’ It needs to happen faster than that.”
The demonstration laid the groundwork for scientific advances that would allow drones to search for a human target and then make an identification based on facial-recognition or other software. Once a match was made, a drone could launch a missile to kill the target. . . .
Research into autonomy, some of it classified, is racing ahead at universities and research centers in the United States, and that effort is beginning to be replicated in other countries, particularly China.
“Lethal autonomy is inevitable,” said Ronald C. Arkin, the author of “Governing Lethal Behavior in Autonomous Robots,” a study that was funded by the Army Research Office.
Arkin believes it is possible to build ethical military drones and robots, capable of using deadly force while programmed to adhere to international humanitarian law and the rules of engagement. He said software can be created that would lead machines to return fire with proportionality, minimize collateral damage, recognize surrender, and, in the case of uncertainty, maneuver to reassess or wait for a human assessment.
In other words, rules as understood by humans can be converted into algorithms followed by machines for all kinds of actions on the battlefield.
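To make that claim concrete, here is a minimal, hypothetical sketch of what such a rule-to-algorithm conversion might look like. Everything in it, the thresholds, the inputs, the ordering of the rules, is an illustrative assumption, not Arkin’s actual design or any fielded system:

```python
# Minimal, hypothetical sketch of rules of engagement encoded as an
# algorithm, in the spirit of Arkin's "ethical governor" idea. All
# names, thresholds, and rules here are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ENGAGE = auto()
    HOLD_FIRE = auto()
    REASSESS = auto()          # maneuver for a better look
    REQUEST_HUMAN = auto()     # wait for a human assessment


@dataclass
class Assessment:
    target_confidence: float   # 0.0-1.0, from recognition software
    is_surrendering: bool      # e.g., a detected surrender gesture
    expected_collateral: int   # estimated civilian harm, 0 = none
    proportional_force: bool   # response proportional to the threat


def ethical_governor(a: Assessment) -> Action:
    """Apply hard constraints before any engagement decision.

    Each rule mirrors one requirement named in the article:
    recognize surrender, minimize collateral damage, respond with
    proportionality, and defer to humans under uncertainty.
    """
    if a.is_surrendering:
        return Action.HOLD_FIRE
    if a.target_confidence < 0.5:
        return Action.REASSESS
    if a.target_confidence < 0.9:
        return Action.REQUEST_HUMAN
    if a.expected_collateral > 0 or not a.proportional_force:
        return Action.HOLD_FIRE
    return Action.ENGAGE


if __name__ == "__main__":
    uncertain = Assessment(target_confidence=0.6, is_surrendering=False,
                           expected_collateral=0, proportional_force=True)
    print(ethical_governor(uncertain))  # Action.REQUEST_HUMAN
```

Even this toy version makes the difficulty visible: every threshold and every boolean input hides a judgment call that the software’s authors, not the machine, are really making, which is exactly where the questions below begin.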
The article alludes to “ethical” and “legal” issues that need to be worked out with this technology. Like what? Is automating war to this extent a good idea? Does it remove human responsibility and guilt for taking a particular human life? Might this kind of technology eventually produce a military in which no actual human beings take part in overt combat? How could it be abused?