Would You Vote for a Machine for President?

September 29, 2016

Sometimes my class on Religion and Science Fiction seems to be incredibly practical, despite the impression the title might give.

Yesterday we talked about whether it would be a good idea to ask an artificial intelligence whether God exists.

The conversations we had were great, and included a contribution from Siri which led to one student being dubbed a “telephone evangelist.”

Some of you may remember me blogging about a movie I saw at Gen Con in 2014 called “The God Question.” You can purchase and view it on Amazon. It is about precisely this question: what would happen if a supercomputer were set to work on the question of whether God exists?

Towards the end of class, I talked about upcoming topics. Next week one of the subjects is whether it would make sense for us to hand over the running of our society to a machine programmed to act in our best interests. Sci-fi stories such as “Return of the Archons” from the original series of Star Trek suggest that the answer is no. But the present election makes it seem like a much more concrete issue, and so I think I’m going to ask students not just to consider the question in the abstract, but to discuss, and perhaps even role-play, a debate that adds a robot as a third candidate alongside Donald Trump and Hillary Clinton. Who would they vote for, and why?

Who would you vote for, and why?

It seems I’m not the only person exploring this question, and so here are some depictions of robots as president or candidates:

[Image gallery: “Make America Afraid Again,” a “Tobor for President” campaign pin, “Robot for President,” an R2-D2 “Hope” parody poster, a robot candidate, and presidential milestones]

We will also be talking about whether rules like Asimov’s Three Laws of Robotics can keep us safe, and that is interesting to think about in conjunction with this image, which is not without its problems (to say the least), but nonetheless clearly sits at the intersection of religion and science fiction…

Programming Robots with the Bible


  • Brad Matthies

    I have it on good authority that Trump is an android. He’s running on a bootleg copy of Windows ME. This explains much.

  • Phil Ledgerwood

    I guess it would depend on who got to define “our best interests” as far as the machine’s programming was concerned.

  • David Evans

    We ourselves are notoriously bad at agreeing on our best interests. I suggest that if this ever becomes possible, the robots should be used as advisers only, and their recommendations should be publicly discussed. After all, if the primary directive is “maximize human happiness,” their response might be to put euphoric drugs in the drinking water.

  • Gakusei Don

    Isaac Asimov wrote some short stories about a robot pretending to be human who becomes President, as well as machines taking over humanity for humanity’s own good:
    https://en.wikipedia.org/wiki/Evidence_(short_story)
    https://en.wikipedia.org/wiki/The_Evitable_Conflict

    • Thanks for highlighting these!

      • Gakusei Don

        You’re welcome! One takeaway I had from Asimov’s “Zeroth Law” for robots was the idea that a small amount of harm to humans could be allowed, as long as it benefited humanity as a group, whereas tight control of humanity for its own good was not a good thing.

        Coming from an atheist like Asimov, it sounds a lot like the theist solution to the Problem of Suffering: that a small amount of suffering is acceptable if it leads to a greater good. To me, it is as though C.S. Lewis had written stories about robots.

    • arcseconds

      I was already thinking about this story, because this post had me thinking about the Three Laws, but somehow I never made the connection that the story was directly relevant…

  • bobyount

    Would the robot be “Sonny”, “VIKI”, HAL 9000, or some other newly fashioned AI, and would the robot have to adhere to Asimov’s Laws?

  • Ken Schenck

    I have actually thought during this election cycle that some of the AI bad guys in recent movies (e.g., I, Robot) might be better for humanity. We don’t seem responsible enough to self-govern.

    P.S. I’ve also had thoughts on the conclusions AI will reach about God when they get up to speed.

  • Brandon Roberts

    sure 😀

  • arcseconds

    I’ve always had problems accepting the idea of an artefact that can fairly be called an AI, in that it interacts with humans with apparent understanding, makes rational decisions, and even exhibits imagination in pursuit of its goals, yet has those goals inflexibly determined by something called ‘programming’, which seems to be meant as pretty much the kind of programming we’re familiar with.

    I’ve never really been able to accept that something so sophisticated that it has emergent properties like imagination could have its highest-level behaviour somehow dictated directly by a ‘program’. You’d have to train it, raise it, or maybe grow it to be like that, with all the uncertainty and open-endedness that would entail.

    So I’m afraid I don’t think ‘a machine programmed to act in our best interests’ is a coherent possibility, for this and other reasons.

    Either it’s not the sort of thing that could genuinely understand what our best interests are, or it’s not the sort of thing that can be programmed to seek them.