Putting “ethical governors” on killer robots

Drone warfare makes some people squirm because of the ethical issues it raises, but right now drones are still controlled by human beings. The upcoming technology, though, would make them autonomous, allowing them to make their own “decisions” about whether or not to kill. To meet the moral objections to giving machines the option to kill human beings, some techies are proposing to tack on separate software they call “ethical governors,” which would automatically run each decision through international law protocols before going lethal.
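To make the idea concrete, here is a minimal, purely hypothetical sketch of what such a “governor” might look like in code: a gate that vetoes a proposed lethal action unless every encoded constraint passes. The class names, fields, and the two rules (loosely modeled on the law-of-war principles of distinction and proportionality) are my own illustration, not any actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    PERMIT = auto()
    VETO = auto()


@dataclass
class ProposedStrike:
    # Hypothetical fields for illustration only.
    target_is_combatant: bool           # distinction: is the target a lawful military objective?
    expected_civilian_harm: float       # rough estimate of incidental harm
    expected_military_advantage: float  # rough estimate of anticipated advantage


class EthicalGovernor:
    """Toy decision gate: a lethal action is released only if every
    encoded constraint passes; otherwise it is vetoed."""

    def evaluate(self, strike: ProposedStrike) -> Verdict:
        # Distinction: never engage someone who is not a combatant.
        if not strike.target_is_combatant:
            return Verdict.VETO
        # Proportionality: incidental harm must not be excessive
        # relative to the anticipated military advantage.
        if strike.expected_civilian_harm > strike.expected_military_advantage:
            return Verdict.VETO
        return Verdict.PERMIT


if __name__ == "__main__":
    governor = EthicalGovernor()
    # Example: the governor vetoes a strike on a non-combatant.
    print(governor.evaluate(ProposedStrike(False, 0.0, 1.0)))  # Verdict.VETO
```

Even this toy version shows where the hard part lies: the checks are only as good as the machine’s estimates of who is a combatant and what harm is “excessive,” which is exactly the kind of judgment the critics doubt can be reduced to code.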

What do you think about this? Can there be “artificial morality” just as there is “artificial intelligence”? (After the jump, a defense of killer robots that goes into these issues.)
