Legal rights for robots as “electronic persons”

A committee of the European Parliament has passed a measure that would give legal rights to robots, classifying them as “electronic persons.”  It also imposes obligations on them, such as liability for any damages they cause.  The report also says that robots must not be designed so as to appear “emotionally dependent” and must have a kill switch, should they go rogue.
That the committee is thinking in science fiction terms is evident in its implementation of Isaac Asimov’s Laws of Robotics, which he developed in his I, Robot stories:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The committee measure says, “A robot may not harm humanity, or, by inaction, allow humanity to come to harm,” while allowing robots the right to defend themselves as long as this rule is not violated.  The measure specifically says that developers must follow Asimov’s laws.
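
Asimov’s laws are, among other things, a design pattern: a strict priority hierarchy in which a lower law can never override a higher one.  As a purely illustrative sketch (every predicate name below is hypothetical, and none of this comes from the committee’s text), the hierarchy might be encoded like this:

```python
# Purely illustrative: Asimov's laws as a strict priority hierarchy.
# All predicate names are hypothetical placeholders.

def evaluate(action: dict) -> str:
    """Judge a proposed action against the three laws, consulted in
    priority order, so a lower law never overrides a higher one.
    `action` maps boolean facts about the proposed action to values."""
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action["injures_human"]:
        return "forbidden"
    if action["prevents_harm_to_human"]:
        return "required"  # refusing to act would violate the First Law
    # Second Law: obey orders given by human beings, except where they
    # would conflict with the First Law (already ruled out above).
    if action["ordered_by_human"]:
        return "required"
    # Third Law: self-preservation, subordinate to Laws One and Two.
    if action["endangers_self"]:
        return "forbidden"
    return "permitted"

# Example: an order that would injure a human is refused outright.
assert evaluate({"injures_human": True, "prevents_harm_to_human": False,
                 "ordered_by_human": True, "endangers_self": False}) == "forbidden"
```

The hard part, of course, is not the precedence logic but the predicates themselves: deciding whether an action “injures a human” is exactly the kind of judgment the laws quietly assume a robot can make.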

The full European Parliament will vote on the measure in February.  For the entire document in English, go here. [Read more…]

Why artificial intelligence won’t conquer humanity

Some smart people, from Bill Gates to Stephen Hawking, have been raising the alarm that computers might become so intelligent that they could conquer the human race.  But artificial intelligence specialist David W. Buchanan explains why this isn’t something we need to worry about, saying the alarmists are committing the “consciousness fallacy”: confusing intelligence with consciousness. [Read more…]

Putting “ethical governors” on killer robots

Drone warfare makes some people squirm because of the ethical issues it raises, but right now drones are still controlled by human beings.  Upcoming technology, though, would make them autonomous, allowing them to make their own “decisions” about whether or not to kill.  To meet the moral objections to giving machines the option to kill human beings, some techies are proposing tacking on separate software they call an “ethical governor,” which would automatically check each decision against international law protocols before going lethal.
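
To make the “ethical governor” idea concrete, here is a minimal sketch of the architecture as described: a separate veto layer that checks a proposed engagement against codified legal rules before the weapon may fire.  The rule names, data fields, and threshold logic below are hypothetical illustrations, not any real targeting system’s API:

```python
# A hedged sketch of the "ethical governor" concept: a veto layer that
# sits between the targeting system and the weapon, and blocks any
# engagement that fails a set of codified legal checks. Everything
# here is an illustrative placeholder, not real LOAC software.

from typing import Callable, NamedTuple

class Engagement(NamedTuple):
    target_is_combatant: bool      # distinction: is this a military objective?
    expected_civilian_harm: float  # anticipated incidental harm (0..1 scale)
    military_advantage: float      # anticipated military advantage (0..1 scale)
    surrender_signaled: bool       # has the target signaled surrender?

Check = Callable[[Engagement], bool]

# Each check stands in for one rule of international humanitarian law.
CHECKS: list[tuple[str, Check]] = [
    ("distinction",     lambda e: e.target_is_combatant),
    ("hors_de_combat",  lambda e: not e.surrender_signaled),
    ("proportionality", lambda e: e.expected_civilian_harm <= e.military_advantage),
]

def governor(e: Engagement) -> tuple[bool, list[str]]:
    """Return (authorized, failed_checks). The engagement is authorized
    only if every check passes; any single failure is an absolute veto."""
    failed = [name for name, check in CHECKS if not check(e)]
    return (not failed, failed)

# Example: a proposed strike that fails proportionality is vetoed.
ok, why = governor(Engagement(True, 0.6, 0.3, False))
assert not ok and why == ["proportionality"]
```

The appeal of the design is separation: the governor has no stake in the mission plan, and a failed check cannot be bargained away by the targeting software.  Whether judgments like proportionality can actually be reduced to such checks is, of course, the very question critics raise.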

What do you think about this?  Can there be “artificial morality” just as there is “artificial intelligence”?  (After the jump, a defense of killer robots that goes into these issues.) [Read more…]