New Article on Artificial Wisdom, i.e. the Intersection of Ethics and Computer Science

August 10, 2018

An article that my colleague Ankur Gupta and I wrote together, exploring the intersection of science fiction, artificial intelligence, wisdom, ethics, and religion, has appeared in a special issue of the journal Religions. The special issue is titled “So Say We All: Religion and Society in Science Fiction” and our article has the title “Writing a Moral Code: Algorithms for Ethical Reasoning by Humans and Machines.” Here’s the abstract:

The moral and ethical challenges of living in community pertain not only to the intersection of human beings one with another, but also our interactions with our machine creations. This article explores the philosophical and theological framework for reasoning and decision-making through the lens of science fiction, religion, and artificial intelligence (both real and imagined). In comparing the programming of autonomous machines with human ethical deliberation, we discover that both depend on a concrete ordering of priorities derived from a clearly defined value system.

Visit the journal’s website to read the whole thing online or download the article in PDF format.

If the article seems too long to hold your interest, the XKCD cartoon below about Asimov’s Three Laws of Robotics sums up one crucial point that we explore. But I really do hope that the cartoon will whet your appetite to see what we do with the topic, rather than serve as a substitute for reading the article!
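The cartoon's point — that the *ordering* of the Three Laws is itself a value system, and different orderings produce very different robots — can be made concrete in a few lines of code. This is my own toy sketch, not code from the article; the scenario fields and verdict strings are illustrative inventions:

```python
# Toy sketch: Asimov's Three Laws as an ordered list of priorities.
# Each "law" inspects a scenario and may issue a verdict; the first
# law in the ordering that has an opinion wins. Reordering the list
# changes the machine's behavior, which is the cartoon's joke.

PROTECT_HUMANS = "protect humans"
OBEY_ORDERS = "obey orders"
PROTECT_SELF = "protect self"

def decide(ordering, scenario):
    """Return the verdict of the highest-priority law that applies."""
    for law in ordering:
        if law == PROTECT_HUMANS and scenario.get("order_harms_human"):
            return "refuse order"
        if law == OBEY_ORDERS and scenario.get("ordered"):
            return "comply"
        if law == PROTECT_SELF and scenario.get("risk_to_self"):
            return "retreat"
    return "idle"

scenario = {"ordered": True, "order_harms_human": True}

# Asimov's ordering: the First Law vetoes the harmful order.
print(decide([PROTECT_HUMANS, OBEY_ORDERS, PROTECT_SELF], scenario))
# -> refuse order

# Swap the first two laws and the very same machine complies.
print(decide([OBEY_ORDERS, PROTECT_HUMANS, PROTECT_SELF], scenario))
# -> comply
```

The same deliberation function yields opposite behavior under the two orderings — a concrete ordering of priorities, derived from a value system, is doing all the work.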

The journal, being open access online, makes articles available as soon as they are through the peer review and editing processes. Ours was the first through that process, and I’m eagerly looking forward to reading the other contributions as they appear!

Elsewhere online, you can explore some recent articles related to at least some aspects of these intersections, if not always quite as many of the threads as Ankur and I sought to incorporate:

The Theology of Battlestar Galactica (A New Podcast)

An NSF grant to explore the ethics of driverless cars

Only a Game on self-braking cars (that’s the blog of the author of The Virtuous Cyborg, where you’ll find more on this subject)

Saving ignorance from AI in Nautilus

New Humanist on AI and common sense

Vox and IO9 on woke droids and more from the Star Wars universe

David Brin on science fiction scenarios

Bias as the real danger of AI

Catholic thinkers on the ethics of artificial intelligence

Gillian Whitaker shared the list of authors contributing to a collection themed around AI and robots

Android Soup for the Soul: How Robots Model Humanity

AI and emergence: An essential meld?

Imago Hominis

In the European Union, the decision was made recently to recognize robots as persons – in the same sense that corporations can be persons before the law.

Summer Special: Could a robot ever have a real human identity?

Lots from (or via) 3 quarks daily:

How Big Data Is ‘Automating Inequality’

Automating Inequality: How High Tech Tools Profile, Police, and Punish the Poor

Artificial Intelligence Is Infiltrating Medicine — But Is It Ethical?

Will humans ever conquer mortality by merging with technology?

To Build Truly Intelligent Machines, Teach Them Cause and Effect

It’s Time for Technology to Serve all Humankind with Unconditional Basic Income

We’re talking about “sex robots” now

The meaning of life in a world without work

What if the Government Gave Everyone a Paycheck?

How a Pioneer of Machine Learning Became One of Its Sharpest Critics

Inside Trends And Forecast For The $3.9T AI Industry

The Disruption Ecosystem

The Threat of AI Weapons

The New Yorker on how frightened we should be of AI

5 Great TED Talks on the Potential of Artificial Intelligence

A couple from Steve Wiggins:

Eternity, Technically

Wired for Good

New York Times article on whether there is a “smarter path” to AI

AI and wealth redistribution

Religious Studies Opportunities Digest – 5 June 2018

https://relcfp.tumblr.com/post/174156653006/boston-university-graduate-student-conference

Finally, let me link to a classic article by Robert Geraci about AI.

"I'm not saying your interpretation of "what" and "how" God is - is wrong. You ..."

Do God’s Ends Justify God’s Means?
"Your thinking appears to be black-and-white. The fact that a person tells one lie does ..."

Do God’s Ends Justify God’s Means?
"Which takes us back to: God lies (1 Kings 22:21-22), and Jesus lies (John 7:8-10). ..."

Do God’s Ends Justify God’s Means?
"Fiction certainly can be entertaining, but it is a poor guide to understanding the real ..."

Do God’s Ends Justify God’s Means?

  • John MacDonald

    Interesting article. I was reminded of the Bridge Officer’s Test in Star Trek TNG where only after ordering close friend Geordi La Forge to make a repair in an area where he would be exposed to fatal doses of radiation did Troi pass the test.

  • Neil Brown

    Loved the article, both the parts I agreed with, and the parts I didn’t (both the parts I understood and the parts I didn’t….)
I confess that I find the “self-driving car problem” to be massively over-thought.
    A crucial rule for all road users is to be predictable – you aren’t the only actor and you cannot predict other actors unless they are predictable, and so you must also be predictable.
    So rather than trying to encode moral values, I’d much rather encode predictability and put research effort into sensor technology so that the predictions have high quality data to work with. If I can trust other actors to be predictable, I don’t need to be nearly so clever myself (and please, keep the human off the roads, unless they are in tanks like the rest of us).

    • Ankur Gupta

      I think the point you bring up about predictability is precisely the reason *why* we have to talk about ethics of driverless cars. In other words, let’s suppose that it’s easy to design a machine that operates in an ideal world with rules that no one is allowed to break. However, currently that is not the situation we face. We have to develop algorithms to help us along the journey towards 100% utilization of autonomous vehicles. Assuming that the problem is trivial in that ideal domain is somewhat of a cop-out.

Further to the point: even if we were able to assume a “closed” and idealized system (where either a car is automated, or it’s not allowed on the road), you would still need to deal with other unexpected circumstances, such as debilitating weather, road failures, or wildlife encounters like deer hits or Godzilla. And in the scenario where those unexpected events force a decision, we have to think about this issue carefully, as above.

      • Neil Brown

        Thanks for your thought. I exactly disagree (though I’m not an expert, so it doesn’t mean much).
        Predictability is precisely the reason why we must NOT have “ethics” in driverless cars. If they attempt to behave “ethically”, then they are less likely to behave predictably (because in any non-trivial situation, ethical choice is based on perception, and different actors perceive different things).