Do you believe in the Singularity?

February 4, 2017


According to Wikipedia, the (technological) singularity is defined as that moment in the future when “the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.”  The more everyday definition of the term, as I’ve seen it used over the past several years, is that point at which a computer/robot becomes so sophisticated in its programming as to become sentient, to have its own wishes and desires, and, ultimately, to pose a threat to humanity, because those wishes and desires would be paired with superhuman abilities (whether physical strength or the hyperconnectivity of the internet).

And yesterday The Atlantic raised a question, “Is AI a Threat to Christianity?”, asking whether the rise of AI would challenge the idea of the soul.  If an artificial intelligence is sentient, does it have a soul?  If so, can it be saved?

Christians have mostly understood the soul to be a uniquely human element, an internal and eternal component that animates our spiritual sides. The notion originates from the creation narrative in the biblical book of Genesis, where God “created human beings in God’s own image.” In the story, God forms Adam, the first human, out of dust and breathes life into his nostrils to make him, literally, “a living soul.” Christians believe that all humans since that time similarly possess God’s image and a soul. . . .

If you’re willing to follow this line of reasoning, theological challenges amass. If artificially intelligent machines have a soul, would they be able to establish a relationship with God? The Bible teaches that Jesus’s death redeemed “all things” in creation—from ants to accountants—and made reconciliation with God possible. So did Jesus die for artificial intelligence, too? Can AI be “saved?” . . .

And what about sin? Christians have traditionally taught that sin prevents divine relationship by somehow creating a barrier between fallible humans and a holy God. Say in the robot future, instead of eradicating humans, the machines decide—or have it hardwired somewhere deep inside them—that never committing evil acts is the ultimate good. Would artificially intelligent beings be better Christians than humans are? And how would this impact the Christian view of human depravity?

But it’s always seemed to me that the issue is more fundamental: the idea of the singularity, of sentient artificial intelligence with its own wishes and desires, is itself a matter of religious faith.

Fundamental to the idea of the soul is the idea that we have free will, the ability to choose whether to do good or evil.  Indeed, it seems to me that this is the defining characteristic that makes us human, that makes humans different from the rest of creation around us.  As I wrote in an old blog post,

Yet consider the case of a lion just having taken over a pride of lionesses, and killing the cubs so as to bring the lionesses into heat, and replace the ousted male’s progeny with his own. Has he sinned? Of course not. It’s preposterous. (I tend to use that word a lot.) But what of a human, say, a man abusing the children of his live-in girlfriend? Do we say, well, that’s just nature for you? No, we jail him.

The Atlantic author, Jonathan Merritt, posits a scenario in which a robot/artificially-intelligent being has no ability to sin, because of its programming.  This certainly seems to be a case in which this creation would not, and could not, have sufficient free will, decision-making ability, emotions, and desires to be considered a being with a soul.

But what about the scenario of a truly sinful AI?  Say, not Data, but Lore, Data’s evil twin in Star Trek?

And that’s where it seems to me that, if humans do create a form of AI that is able to make moral decisions, to act in ways that are good or evil, depending on the AI’s own wishes and desires, it would call into question the idea of the soul, of any kind of distinctiveness of humanity.  It would suggest that our decisions to act in ways that are good or evil are not really decisions made of our own free will, but a matter of our own programming.  And if a “soul” is really just a matter of immensely sophisticated “programming” — whether biological or technological — the very notion of the soul continuing after death seems foolish.

But we speak of “the singularity” as if it’ll inevitably happen — it’s only a matter of when.  And it seems to me that this conviction, that we, or our children, or our children’s children, will live in a world with sentient robots, whether a HAL or a Data, is itself a matter of belief, a religious belief, in which believers hold the conviction that advances in technology will mean that in one field after another, the impossible will become possible.  Sentient artificial life?  Check.  Faster-than-light travel to colonize other worlds?  Check.  The ability to bring the (cryogenically-frozen) dead back to life?  You got it.  Time travel?  Sure, why not.  And, ultimately, the elimination of scarcity and the need to work?  Coming right up!  Sure, there is no God in this belief system, except that technology itself becomes a god — not in the metaphorical sense of “something we worship,” but instead something in which people hold faith-like convictions that shape their worldview.

 

Image:  https://commons.wikimedia.org/wiki/File%3ATOPIO_3.jpg; By Humanrobo (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
