ReligionProf Podcast with Ankur Gupta (Part 2): More on Robot Ethics


This week’s episode continues my most recent conversation with computer scientist Ankur Gupta. Ankur is surprisingly humble given some of the things that he has worked on. We have read Safiya Umoja Noble’s book Algorithms of Oppression and have begun working on an article that engages with it, and in the process I learned that he worked on the development of the very algorithm used in Google searches of the sort that is the focus of Noble’s book!

I hope you are enjoying getting to hear Ankur and me chat about our project on computing, ethics, and religion. As you’ll have heard in last week’s episode, Ankur and I are engaged in a conversation about valuing, computation, and moral reasoning. Richard Beck wrote something recently that seems to apply directly to the challenges facing attempts to program machines to act ethically:

Naked reason can guide ethical reflection and deliberation. Reason is vital when we face ethical quandaries and predicaments. But naked reason needs to know what, ultimately, we ethically care about. We need to know 1) what we value and 2) how those values rank against each other when they come into conflict. Otherwise, how could you ever make an ethical decision? All you’d be able to do is make long lists of ethical pros vs. cons. Remember those frustrating debates in your college Ethics 101 class? All those interminable ethical debates are just like those patients with damage to the frontal cortex. The conversation and debate never ends. And yet, we have to make moral choices in life. So how to choose? We just have to take some goods as given and/or more important than other goods. And rationality itself can’t make that call. There are many rational conclusions to ethical debates. Rationality is just a computational tool. Reason can’t tell you, in the end, what to care about. Just sit in on an ethics class.

All that to say, when I describe morality as being metaphysical I’m talking about how rationality is separate from the values we have to input into the system.

These values aren’t the product of reason, they make reason possible.

In the same way, machines need to have something they value first – something programmed into them – which they can then learn to apply and strive for with increasingly greater precision.
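To make that point concrete, here is a minimal, hypothetical sketch in Python (with invented names and a made-up scoring function, not anything from our actual project): the value is supplied to the machine from outside, and the learning loop only becomes more precise at pursuing whatever it was given.

```python
import random

# The "value" is supplied from outside: the learner does not derive it.
# This scoring function is the programmer's choice, not a deduction.
def value(x: float) -> float:
    """Higher is better; peaks at x = 3.0 -- a stipulated goal."""
    return -(x - 3.0) ** 2

def learn(steps: int = 1000) -> float:
    """Simple hill-climbing: refine behavior with respect to the given value."""
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)
        if value(candidate) > value(x):  # only the supplied value decides
            x = candidate
    return x

if __name__ == "__main__":
    print(learn())  # converges near 3.0: ever more precise pursuit of a given goal
```

Swap in a different value function and the same loop pursues a different goal with the same precision; the loop itself never settles what is worth pursuing.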

Other posts in Beck’s series on metaphysics and ethical reasoning will likely also be of interest to you. See as well the interview with Adam Rutherford in which he says, “There are limits to the use of scientific explanations of things we value,” and George Dyson on digital-analog hybrids. Most directly related to our research area are the article in Harvard Magazine about the pitfalls of machine learning and Vance Morgan’s article, “Alexa, Hear My Prayer.” See too “A Future Without Boredom” on robots and the future of work.
