–Ronald Arkin, Georgia Institute of Technology
There is a fundamentally anti-human belief that we can program an ethical machine that will coldly evaluate a situation and always make the right choice, unlike these icky meat sacks and their faulty programming. Humans, in this evaluation, are just bad code. Remove them from the loop, and all will be well.
Professor, let me introduce you to Lieutenant Colonel Stanislav Yevgrafovich Petrov (courtesy of Leah Libresco), who declined to annihilate the planet despite overwhelming, and false, evidence that this would have been the proper course of action. The computer would have launched. The human, tempered by judgment and mercy, did not.
Obama’s drone war is already one of the most horrific, merciless, cold, inhuman war crimes of our time. Automation wouldn’t make it any better. Giving drones the power and authority to kill, removing the human from the decision loop (something an officer once told me would never, ever happen), is madness to the nth degree.
Professor Arkin is an expert on the subject of autonomous lethality in robots. I would suggest that this is nothing for which we need experts. We need to say: “Okay, no. We don’t program robots with that capability, whatever short-sighted and spurious reasons you care to cook up to the contrary.” We would be better without any robots at all than with even one programmed with the capacity to kill. Robots aren’t actually necessary, and humanity can do just fine without them.

You don’t need to fear a world without robots. You need to fear a world with people who feel robots can be more “ethical” than humans. You need to fear a world where morality has collapsed so completely that an elite feels the need to restore that morality through machines. A machine is incapable of being a moral agent.