I liked it when Agent Ellison pointed out that the programmers and creators had failed to give the AI a code of ethics, had failed to teach it to value human life. And I was disappointed when, given the opportunity to say what he would teach the machine if he could, Ellison replied, “If you want to teach it commands, start with the first ten.” It was a clever throwaway line, but I fear it represents more than that: the notion (popular in the United States, particularly among people who don’t know the contents of the ten commandments) that learning morality means learning the ten commandments.
I disagree. I don’t think having a rule not to kill imposed on a person or a society is the same thing as teaching an individual or a group to value human life. And it is the mystery of how we instill that value that remains elusive. We understand so little about how the human mind works, how we learn, and how best to teach our values that it is far from straightforward to answer the question of how to teach an AI to value human life. We still don’t know how to teach that value to human beings effectively and consistently.
It is Terminator 2 that holds out the most hope in this regard, by reversing the order. At the end it suggests that if a machine can come to learn the value of human life, then perhaps so can we. Then again, while this was certainly meant to be hopeful, perhaps it also presents the challenge: so many of us desire to spread a perspective that values human lives, yet in seeking to accomplish this aim, we often seem to have no more idea how to go about it than we would in trying to teach a machine. That is one of the greatest challenges humanity faces, for unless we can figure it out, then even if our machine creations don’t kill us, we human beings may kill one another.