Robot Ethics CFP

August 19, 2015

Via the Religion Call for Papers Tumblr I learned about this volume. Since I have published on the topic in the past, I should perhaps consider submitting an abstract, but I have other projects I am committed to completing before the end of the year, and so am still unsure whether I will. Either way, I suspect that blog readers may be interested in it.

Robot Ethics 2.0 (MIT Press)

Abstract deadline: 10 Sept 2015 (and ongoing)
Paper deadline: 1 Feb 2016

We are putting together an edited book on robot ethics, to be published by MIT Press. 

This is a follow-up to our previous volume, Robot Ethics, and will cover more recent developments in the field, including:

· Autonomous cars
· Space, underwater, other environments
· Entertainment and sports
· Cyborgization and enhancement
· Personal/gender identity
· Security, crime, and abuse
· Environment
· Risk and uncertainty
· Programming and design strategies
· Privacy and other law
· Sex and psychology
· Deception and artificial emotions
· Human-robot interactions
· Medical, health, and personal care
· Labor and unemployment
· Internet of Things
· Artificial intelligence and robotics
· Religion
· Other related topics.

As with our first volume, we already have several prominent roboticists and technology ethicists as prospective contributors, but we’re always happy to hear about new research. 

Technical papers should not be too technical (and/or technical parts should be very clearly explained or footnoted), as this volume is designed for a broad audience.

If you are able to deliver a new paper (5,500-7,500 words) for this anthology by February 1, 2016, we’d be happy to consider an abstract (or full paper), which may be emailed to Dr. Patrick Lin at 

To receive full consideration of your abstract, please submit by September 10, 2015. We expect to notify authors with a decision by September 30.

Please include your current affiliation or a short biosketch, including contact information, with your abstract. All submissions must be original and not under consideration elsewhere. Feel free to distribute this call for papers to colleagues.

This CFP can also be found here:

Thank you!

The editors:
Patrick Lin, George Bekey, Keith Abney, and Ryan Jenkins

Patrick Lin, Ph.D.
Director, Ethics + Emerging Sciences Group
California Polytechnic State University
Philosophy Department
Bldg. 47, Room 37
San Luis Obispo, CA 93407

What Are Your Thoughts? Leave a comment.
  • Gary

    I find this interesting: ethics versus trust. When I used to work for the military, weapons systems often had an auto mode. Specifically, AEGIS has an auto mode and can detect, classify, and identify an air contact, and automatically shoot it down with no human interaction. I also remember a robot vehicle with a .50 cal machine gun mounted on it that could be put into auto mode to shoot with no human interaction. However, no one ever used these modes in the real world, since no one really trusted them to kill with no errors. We have enough errors with humans in the loop. So computer ethics is a long way from being realized, since we don’t yet have software trustworthy enough to leave the human decision-maker out of the loop. Although, I also remember the Army working on a bomb to be released over a battlefield that then breaks up into many individual bomblets, each one smart enough to seek out a target that resembles an armed vehicle to be destroyed. Maybe ethics isn’t a consideration in military systems. Only fear of fratricide.

  • Phil Ledgerwood

    I hope they address the ethics of robotic phone calls on behalf of political candidates, and especially whether or not this provides an ethical basis for electing robotic politicians (Bob Dole – looking in your direction).

    • Gary

      I don’t know…I’d give Dole a break. A wounded WW2 veteran with no prostate.
      Now Trump…the ultimate salesman. If the robotic phone message center were smart enough, we might see a “Vote for Trump, and you will receive an all-expense-paid 3-day trip to Las Vegas and a free round of golf at a Trump resort. You just need to agree to attend a 2-hour seminar about buying a Trump time-share in Vegas!”
      Now here, we might have some ethics problems! Heck, I might vote for him. Especially if he throws in some free buffet tickets.

  • I listened to a podcast of StarTalk in which Neil deGrasse Tyson had a conversation about robots with roboticist Stephen Gorevan of Honeybee Robotics. At first the conversation centered on how to define a robot – basically, a mechanical device capable of carrying out autonomous or semi-autonomous activities.

    The discussion turned toward Asimov’s three laws of robotics, and Gorevan made a disturbing observation: some of the most complex robots in use today, military drones, are actually designed to break Asimov’s laws.

    • Gary

      AEGIS ships are more complicated, but both could be fully automatic, so I would consider them both robotic. Both have a human in the loop, though, because their purpose (at least the end result) is to kill specific people, and no one really trusts them to avoid fratricide, blue-on-blue, or blue-on-neutral errors. Heck, I can’t even fully rely on my Windows computer to operate the way I want. How can we expect a machine to kill people without making mistakes? So military robotics is totally unrelated to Asimov’s laws.

      As a note, I know people get upset when I say this, since I have the utmost respect for the military… A Navy CDR I worked for, who was the head of a carrier air wing and an F-14 pilot, was giving a presentation to a joint program group. An Air Force Colonel was getting on his case about something (the Air Force Colonel was a program manager, a dedicated job category specific to the Air Force: a paper pusher. The Navy, on the other hand, places fighter pilots into program management positions on a rotating basis; they are not dedicated paper pushers). The Navy CDR got pissed off, slammed his pointer onto the table, and said, “Damn it, I get paid to kill people. I don’t get paid to push paper.”

      I never forgot that. The military does a good job at what it does. But never forget what it gets paid to do.
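The trust problem Gary describes, auto mode versus a human in the loop, can be sketched as a tiny decision gate. This is purely illustrative: the `Contact` class, the confidence threshold, and the `engage_decision` function are invented for this example and do not describe any real weapons system.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected contact, as in the AEGIS example (hypothetical fields)."""
    classification: str   # e.g. "hostile", "friendly", "neutral", "unknown"
    confidence: float     # classifier confidence, 0.0 to 1.0

def engage_decision(contact: Contact, auto_mode: bool, human_approves=None) -> bool:
    """Return True if the system would engage the contact.

    In auto mode the machine decides alone; otherwise a human must approve.
    The 0.95 threshold is an arbitrary illustrative value.
    """
    machine_says_engage = (contact.classification == "hostile"
                           and contact.confidence >= 0.95)
    if auto_mode:
        # No human in the loop: any classifier error becomes a fratricide risk.
        return machine_says_engage
    # Human in the loop: the machine only recommends; a person decides.
    if human_approves is None:
        return False  # no operator input, no engagement
    return machine_says_engage and human_approves(contact)

# A friendly aircraft misclassified as hostile: auto mode would fire,
# while a cautious operator would not.
friendly_misread = Contact(classification="hostile", confidence=0.97)
print(engage_decision(friendly_misread, auto_mode=True))   # True
print(engage_decision(friendly_misread, auto_mode=False,
                      human_approves=lambda c: False))     # False
```

The point of the sketch is the one Gary makes: the machine's output is identical in both modes, and the only safeguard against a confident misclassification is the human gate.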