Technology experts such as Elon Musk and scientists of the caliber of Stephen Hawking are worried about Artificial Intelligence. The fear is that computer and online technology will develop to the point that the machines become more intelligent than the human beings who made them. When that happens, machines may attain consciousness.
Those artificially-intelligent machines might make human life better than ever before. Or they might take over the world, viewing humans as unnecessary and primitive parasites to be exterminated.
Ellen Duffer has written a provocative article entitled “As Artificial Intelligence Advances, What Are Its Religious Implications?” in Religion & Politics.
She cites theologians and ethicists who are conducting “thought experiments” about the issues that might be raised by artificially intelligent machines.
For example, if robots and androids attain human-level intelligence and beyond, would they have “rights”? If human beings were made in God’s image, and robots are made in the human image, wouldn’t they be made in God’s image too, with all the rights and privileges thereof?
Should robots be baptized? Evangelized?
The robots will need to be programmed with some kind of ethical behavior, won’t they, so that they won’t harm their human owners? (See science fiction novelist Isaac Asimov’s Three Laws of Robotics.) But one thinker cited in the article worries that Christian technicians, in the early days of artificially-intelligent robots, might go beyond “general ethical principles” to impose specifically “Christian values.” The implication is that this would be a bad thing, that Christians have no right to impose their own beliefs and values on machines.
But it isn’t just the prospect of computers becoming like human beings that raises concerns. Might they become like God? Might computers attain “omni” status? That is, might they attain the omniscience, omnipresence, and omnipotence of God?
If they did, it would surely be idolatrous to worship them (something humans would be tempted to do) and to serve them (something humans might have no choice about).
What about turning to the omnipotent machines to “save” us, to deliver us from death? Would that be idolatrous?
But here is the problem: Worries about Artificial Intelligence reflect a diminished view of human beings and the human mind.
Is it just “intelligence” that makes us human? Indeed, a long philosophical tradition stresses that we are “rational animals,” that what defines us is our reason.
But is it? The mind has many faculties. Reason and intelligence, yes, but also imagination, emotions, the will, and more. The imagination, the power to think in sensory images, itself includes a number of related abilities: memory, reverie, the ability to conceive of the future (which we use in planning, daydreaming, and worrying), and creativity (the ability to conceive of things that do not exist, which is necessary not only for artistic creation but for virtually any kind of work, including the invention of computers and the capacity to conduct “thought experiments”).
And consciousness is not a function of any of these. Rather, it is the personal identity that looms behind them all, not only experiencing but also activating them all.
And beneath even our present consciousness is our fundamental center of identity: the soul.
If a machine attains artificial intelligence, would it have an artificial imagination? Artificial emotions? An artificial will? An artificial soul?
So even though extremely rapid calculation of programmed algorithms might eventually result in a simulacrum of “intelligence,” that would not result in consciousness, much less any approximation of the human.
But the very fact that we think it would shows that we have already diminished human life, neglecting its complexity and its multiple dimensions, including our spiritual identities, in favor of a simplistic, shallow, mechanical reductionism.
Illustration by geralt, via Pixabay, CC0, Creative Commons