Putting “ethical governors” on killer robots

July 18, 2014

Drone warfare makes some people squirm because of the ethical issues it raises, but right now drones are still controlled by human beings.  The upcoming technology, though, would make them autonomous, allowing them to make their own “decisions” about whether or not to kill.  To meet the moral objections to giving machines the option to kill human beings, some techies are proposing tacking on separate software, which they call “ethical governors,” that would automatically run those decisions through international-law protocols before going lethal.

What do you think about this?  Can there be “artificial morality” just as there is “artificial intelligence”?  (After the jump, a defense of killer robots that goes into these issues.)

From Erik Schechter: In Defense of Killer Robots – WSJ:

Certainly the idea of killer robots is unsettling, and proceeding with caution is a good idea—so long as we don’t completely stop exploring this technology. Autonomy is already a feature of the modern military. Experimental drones can now fly themselves, pick out their own landing zones and travel in mutually supporting swarms. A logical next step is to give these unmanned systems the power to fire on their own, delivering weapons on target faster and with greater precision than a human ever could.

The argument for the ban goes like this: War, though mind-shatteringly nasty, is still not a moral free-for-all. Under international humanitarian law, we expect combatants to do their best to spare the lives of civilians. In the pre-al Qaeda days of uniforms, insignia and organized front lines, it was easy to distinguish legitimate from illegitimate targets. But in our age of amorphous militants and low-intensity conflict, making that distinction is often challenging for a human soldier, the critics say, and it would be nearly impossible for a robot.

Adhering to international humanitarian law also means making moral judgments under chaotic conditions. A machine will never be able to assign a value to, say, bombing a bridge and weigh its strategic importance against the cost borne by the local population. Equally problematic, machines lack basic human empathy. So an autonomous robot would respond to a 12-year-old holding a weapon very differently than a soldier would. . . .

Autonomous weapons systems of the near future will be assigned the easy targets. They will pick off enemy fighter jets, warships and tanks—platforms that usually operate at a distance from civilians—or they will return fire when being shot at. None of this is a technical stretch. Combat pilots already rely on machines when they have to hit a target beyond visual range. Likewise, some ground-combat vehicles have shot-detection systems that slew guns in the direction of enemy fire (although we’d probably want a robot to rely on something more than acoustic triangulation before unloading).

As for the moral judgment objection, machines may not have to be philosophers to do the right thing. Ron Arkin, a roboticist at the Georgia Institute of Technology, posits a scenario in which a human commander determines the military necessity of an operation; the machine then goes out and identifies targets; and right before lethal engagement, a separate software package called the “ethical governor” measures the proposed action against the rules of engagement and international humanitarian law. If the action is illegal, the robot won’t fire.

Mr. Arkin’s ethical-governor concept has been met with much skepticism. But let’s assume for the moment that warbots, unhampered by feelings of fear, anger or revenge, can outperform human soldiers in keeping the rate of civilian casualties low. (We’ll know for sure only if such a system is developed and tested.) If the goal of international humanitarian law is to reduce noncombatant suffering in wartime, then using sharpshooting robots would be more than appropriate, it would be a moral imperative.

Anticipating this utilitarian argument, disarmament activists contend that, real-life consequences aside, it is inherently wrong to give a machine the power of life or death over a human being. Killing people with such a self-propelled contraption is to treat them like “vermin,” as one activist put it. But why is raining bombs down on someone from 20,000 feet any better? And does intimacy with one’s killer really make death somehow more humane?

Another, related objection goes to the issue of responsibility. Predator drones, activists note, have a human crew (albeit one ensconced in an air-conditioned trailer stateside), so there is someone to blame if something goes wrong. But in the case of a fully autonomous system, who is liable for an unlawful killing? Is it the field commander? The software engineer? The defense contractor that performed the integration work? These are serious questions but are hardly showstoppers or even unique to killer robots. One could ask similar questions about injuries or deaths caused by self-driving cars.
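For readers wondering what such a “governor” might actually look like in software, here is a minimal sketch of the gating logic Arkin’s proposal implies: a human commander has already settled the military necessity of the operation, the targeting system proposes an engagement, and a final rule check can only veto it. Every class name, rule, and threshold below is an illustrative assumption of mine, not Arkin’s actual design or any real targeting system.

```python
# A hypothetical "ethical governor" gate: a last software check that can only
# withhold fire, never authorize it on its own. All names, rules, and
# thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ProposedEngagement:
    target_class: str              # e.g. "tank", "warship", "fighter_jet", "person"
    is_returning_fire: bool        # target is actively firing on the platform
    distance_to_civilians_m: float
    expected_collateral: int       # estimated civilian casualties
    military_necessity: bool       # affirmed by the human commander, not the machine


# Each rule returns True if the engagement is permissible under that rule.
Rule = Callable[[ProposedEngagement], bool]

RULES: List[Rule] = [
    # Distinction: engage only recognized military platforms or active shooters.
    lambda e: e.target_class in {"tank", "warship", "fighter_jet"} or e.is_returning_fire,
    # Proportionality (crude stand-in): no expected civilian casualties.
    lambda e: e.expected_collateral == 0,
    # Keep a buffer from civilians (illustrative 500 m threshold).
    lambda e: e.distance_to_civilians_m > 500.0,
    # Military necessity must already have been determined by a human commander.
    lambda e: e.military_necessity,
]


def ethical_governor(engagement: ProposedEngagement) -> bool:
    """Return True only if every rule permits firing; any failure withholds fire."""
    return all(rule(engagement) for rule in RULES)


if __name__ == "__main__":
    proposal = ProposedEngagement(
        target_class="tank",
        is_returning_fire=False,
        distance_to_civilians_m=1200.0,
        expected_collateral=0,
        military_necessity=True,
    )
    print("fire" if ethical_governor(proposal) else "withhold fire")
```

The design point worth noticing, and the one the excerpt leans on, is that the governor is purely a veto: it can withhold fire when a rule fails, but it never decides on its own that a killing is warranted.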

"I think when anyone is using the power of their office to thwart the results ..."

DISCUSS: Presidential Immunity
"I believe that he believed, but I am perfectly willing to accept the possibility that ..."

DISCUSS: Presidential Immunity
"I'll bet the actual laws don't say that, but assuming you are talking about some ..."

Abortion Supply and Demand
"First, I do care "whether potus is given supreme executive power," whatever that exactly is. ..."

DISCUSS: Presidential Immunity

Browse Our Archives