Your Terrifying Techno-Fascist Quote of the Day

“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of.”

– Ronald Arkin, Georgia Institute of Technology

What’s the man talking about? Autonomous drones: dumb metal programmed by fallible humans to wage a more merciful war. (There’s no such thing. Even Star Trek figured that out.)

There is a fundamentally anti-human belief that we can program an ethical machine that will coldly evaluate a situation and always make the right choice, unlike these icky meat sacks and their faulty programming. Humans, in this evaluation, are just bad code. Remove them from the loop, and all will be well.

Professor, let me introduce you to Lieutenant Colonel Stanislav Yevgrafovich Petrov (hat tip to Leah Libresco), who declined to annihilate the planet despite overwhelming (and false) evidence that this would have been the proper course of action. The computer would have launched. The human, tempered by judgment and mercy, did not.

Obama’s drone war is already one of the most horrific, merciless, cold, inhuman war crimes of our time. Automation wouldn’t make it any better. Giving drones the power and authority to kill – removing the human from the decision loop (something an officer once told me would never, ever happen) – is madness to the nth degree.

Professor Arkin is an expert on the subject of autonomous lethality in robots. I would suggest that this is nothing for which we need experts. We need to say: “Okay, no. We don’t program robots with that capability, whatever short-sighted and spurious reasons you care to cook up to the contrary.” We would be better off without any robots at all than with even one programmed with the capacity to kill.

Robots aren’t actually necessary, and humanity can do just fine without them. You don’t need to fear a world without robots. You need to fear a world with people who feel robots can be more “ethical” than humans. You need to fear a world where morality has collapsed so completely that an elite feels the need to restore it through machines. A machine is incapable of being a moral agent.

About Thomas L. McDonald

Thomas L. McDonald writes about technology, theology, history, games, and shiny things. Details of his rather uneventful life as a professional writer and magazine editor can be found in the About tab.

  • http://moralmindfield.wordpress.com Brian Green

    Complete agreement here! While they are at it, I’ve heard about this great idea for a supercomputer that can be put in charge of all of our military activities. It’s called Skynet.

    Do these people never read speculative fiction or watch SF movies? Not that fiction ought to guide policy, but it at least might give a few hints about bad ideas… Ethics doesn’t work if you have no imagination about what your choices are or what the results of those choices might be. Not that this needs to be argued consequentially, either; there ought to be some kind of basic realization on the part of these reasoners that letting artifacts kill people without human judgment at the moment of decision is contrary to human dignity. It reduces us to worthless machines, cogs that other machines can just remove by lethal force.

    Giving the autonomous drone only non-lethal weapons might be an interesting intermediary case, but I would still oppose it because the step from non-lethal to lethal weaponry is one of degree, not kind.

  • http://www.godandthemachine.com Thomas L. McDonald

    If we’re talking about a surveillance system pre-programmed to fly a pattern, do its work, and return, with some ability to react to variations in flight conditions, then I don’t have a problem with that. They’re already doing it. But you are correct: even arming one with non-lethal weapons would be too big a step, and for what possible gain? We simply distance humans that much further from the cost and consequences of war, and rely on machines to do things an elite has decided we are no longer capable of doing ourselves. Our confidence in our machines will be our undoing.

  • Pingback: Obama Understands Something Fundamental About the Millennial American

  • Tim Jones

    This is why ethics are too important to leave to the experts and “professional” ethicists. Ethics and moral decisions made on behalf of a society should reflect the ethics and morals of that *entire* society, not just a cadre of elites (who always – ALWAYS – have an agenda). These are decisions we must make together, every one of us. The moral opinion of every waitress, cab driver, and construction worker should count as much as that of any scientist or bureaucratic functionary.

    Do we give robots the authority to kill? No. Full stop. It doesn’t matter what kind of tortured calculus the CIA, the Chiefs of Staff or anyone else cooks up in their hothouse. They work for us.

    “No,” even if every secret lab has to be pulverized… “No.”

  • elGaucho

    I am not sure how he can say a robot can make a more ethical decision than a human. All it does is execute an algorithm, AI or otherwise. How that algorithm is designed is the better place to ask what is more ethical. But really it is all just a red herring. There isn’t, and should not be, a sort of technical, applied philosophy that computes which is the more ethical thing to do and then weighs the costs. A life is a life. And of course, the man who produces such a robot would never ask whether a robot that precludes the loss of our own soldiers’ lives while still inflicting the loss of enemy lives might just make war easy enough that it becomes the default alternative to diplomacy. The sketch below makes the first point concrete.
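    Here is a minimal, purely hypothetical sketch (invented names and numbers, not any real targeting system) of what a machine’s “ethics” amounts to: constants and rules a human chose at design time.

        # Hypothetical sketch: the machine's "ethical decision" is just the
        # execution of rules and thresholds a programmer picked in advance.
        # Every name and number here is invented for illustration only.

        def may_engage(target_confidence, civilians_nearby, rules_of_engagement_ok):
            """Return True if the invented engagement rules are satisfied."""
            CONFIDENCE_THRESHOLD = 0.95  # why 0.95 and not 0.99? A human decided.
            if not rules_of_engagement_ok:
                return False
            if civilians_nearby > 0:  # a human decided how many bystanders are acceptable
                return False
            return target_confidence >= CONFIDENCE_THRESHOLD

        # The machine cannot weigh mercy, surrender, or doubt -- only the inputs
        # and cutoffs it was given. Change the constants and its "ethics" change.
        print(may_engage(0.97, 0, True))   # True: all human-chosen thresholds met
        print(may_engage(0.97, 1, True))   # False: a human-chosen rule forbids it

    The moral judgment happened in the programmer’s head before the machine ever ran; the robot contributes nothing ethical of its own.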

  • victor

    For his troubles, Petrov was reprimanded and forced into retirement. This is a documentary I really want to see: http://www.logtv.com/films/redbutton/video.htm

    But, yes: meat sacks unite!

  • Linebyline

    Distance, you say? Exactly. I suspect that at least part of the reason for this push is so that people can blame disasters on failures of the technology (which can be fixed if we just put a few more million dollars we don’t have into R&D) rather than on negligence or malice on the part of the operators. Maybe that’s just me being cynical?

  • Doubledad

    Agree completely. As an Air Force officer who coordinated with drone units, I was always concerned about how remote warfare could make it that much easier to insulate warriors from the consequences of their actions. It is much easier to convince yourself to kill when humans look like blurred images in a video game. Better still, why not have the machine make the decision for me, so that I can ease any sense of responsibility?

    Some things you can’t delegate.

  • http://textsincontext.wordpress.com/ Michael Snow

    What I find most chilling is the day when Christians talk more about an ethical calculus than about imitatio Christi. As Americans, we have come a long way from the early Christians who “could not bear to see a man put to death, even justly” and the convictions of men like Moody: “There has never been a time in my life when I felt that I could take a gun and shoot down a fellow-being. In this respect I am a Quaker.”
    http://www.amazon.com/Christian-Pacifism-Fruit-Narrow-ebook/dp/B005RIKH62/ref=zg_bs_158546011_92

  • c matt

    “Do these people never read speculative fiction or watch SF movies? Not that fiction ought to guide policy…”

    In many ways, it should – SF allows one to explore the consequences of a proposed technology BEFORE it goes on-line, so to speak. Have these fools never watched Battlestar Galactica!?!

    The machine may never be a moral agent, but it is always the agent of a moral agent. Someone has to program it and turn it loose.

