Is Strong AI Possible? Mormonism Says No.

Last month the artificial intelligence computer system “Watson” beat the two biggest winners ever on Jeopardy!. Watson is a kind of narrow artificial intelligence: it’s programmed to do something very specific, which is to answer questions for Jeopardy!. As David Ferrucci, lead researcher of the IBM team that created Watson, said, it can only respond to content it has been given and analyzed; it understands language “only in a way we call statistical machine learning. It gives you the answer that makes sense to you, but it doesn’t mean anything to the computer.” [1] It can’t make a joke or give its own interview.

Computers excel at many tasks where human intelligence fails, and they’re getting faster and faster. But when it comes to basic human abilities, such as “spatial orientation, object recognition, natural language, and adaptive goal-setting,” humans still win hands down. [2] Strong AI, or artificial general intelligence, doesn’t exist. But some people think it will, and sooner than you might think.

Technological savant Raymond Kurzweil believes that because computers are getting faster at an ever-increasing rate, this exponential growth will eventually result in humans creating artificial intelligence that is smarter than they are. He estimates this will happen by 2045. It sounds like science fiction, and in fact this scenario is precisely what the excellent TV series Battlestar Galactica was based on. His critics say that he underestimates the complexity of the human brain. Says biologist Dennis Bray, “Although biological components act in ways that are comparable to those in electronic circuits, they are set apart by the huge number of different states they can adopt.” Chemical modifications layered on top of other modifications, spreading out in multiple directions, result in a “combinatorial explosion of states endow[ing] living systems with an almost infinite capacity to store information.” [3] As someone trained in biology, I find the argument from exponential growth unconvincing, because while living systems do experience exponential growth, this growth is always a phase, not a continual state of being. The growth curve of bacteria in culture looks more like a stretched-out letter S than a letter J. Our computing power is growing exponentially, but does that necessarily mean it always will?
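The S-versus-J contrast can be sketched numerically. Below is a toy comparison of an unbounded exponential curve with a logistic (S-shaped) curve; the growth rate and carrying-capacity numbers are made up for illustration, and nothing here is a claim about actual hardware trends:

```python
import math

def exponential(t, n0=1.0, r=0.5):
    # J-curve: growth rate proportional to current size, no ceiling
    return n0 * math.exp(r * t)

def logistic(t, n0=1.0, r=0.5, k=1000.0):
    # S-curve: same early behavior, but saturates at carrying capacity k
    return k / (1 + ((k - n0) / n0) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable
print(exponential(2), logistic(2))

# Much later, the exponential keeps climbing while the logistic
# curve has leveled off near its carrying capacity of 1000
print(exponential(30), logistic(30))
```

The point of the bacteria analogy is visible in the second comparison: both curves look “exponential” while you are still on the early part of the S, which is exactly where an observer inside the growth phase would be standing.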

An article on AI in the March issue of The Atlantic points to human adaptability as a reason artificial intelligence will never beat natural intelligence. People tend to assume that human intelligence is static while artificial intelligence can evolve rapidly, but humans adapt too. In the Turing test, computers compete against humans to try to fool judges into thinking they are actually human. If more than 30% of the judges believe a computer is a human being, the computer wins. So far no computer has done it, but they’re getting close. Eventually, a computer is probably going to pass the Turing test. But does that mean humans are beaten forever? The Atlantic article points out that after the IBM computer Deep Blue beat Garry Kasparov at chess in 1997, Kasparov wanted a rematch, but IBM dismantled the computer and it never played again. Once beaten, Kasparov was ready to re-tool and go for it again. I’ll bet he could have won, because he’d be able to adapt to the nature of his opponent more quickly than Deep Blue could.

Regardless of the physiological and philosophical arguments about whether strong AI is possible, I think Mormon theology says it’s not. For one thing, Doctrine & Covenants 93:29 says that intelligence is like matter and energy – it can’t be created or made. And the Book of Abraham says that intelligences existed before any physical parts of our nature. The scriptures are no doubt using the word intelligence in a different way than our everyday usage, referring to something spiritual in nature rather than just IQ. But it’s the spiritual intelligence that makes us unique as humans, I think. Could a computer ever feel the Holy Spirit? Would it ever yearn to commune with God? To create? Could it yearn for anything at all? As humans, we don’t just think, we also feel. It seems to me that if an AI system can’t do those things, it’s lacking in a significant aspect of human intelligence.

What do you think? Is the Battlestar Galactica scenario possible? Or can no one create general/natural intelligence?

1. “10 Questions,” Time, March 7, 2011, p. 104.
2. “Artificial Intelligence? Why Machines Will Never Beat the Human Mind” by Brian Christian, The Atlantic, March 2011, p. 68.
3. “2045: The Year Man Becomes Immortal” by Lev Grossman, Time, February 21, 2011, p. 48.

  • Aaron deOliveira

    i think just as we have the other qualities of god in embryo within us; most beautifully expressed in our sharing with him the clothing of a spirit in a body, we have the ability to organize intelligence into forms that foreshadow our eternal destiny.

    another thing i think about artificial intelligence is that rather than compare it to a human mind or an immortal spirit, a better comparison may be to the workings of the universe. the lord organizes worlds without end that have their rules and bounds. the planets and other celestial bodies move much the same way as a logic circuit in a computer. as described in the article, artificial intelligence functions much the same way, excelling within the bounds it is organized in. perhaps what we express in creating artificial intelligence is a line upon line mastery of the principles and priesthoods that will later be used to govern worlds without end.

  • Paul 2

    I think that if the silicon were ready, God would put a spirit in it and give it its freedom and a virtual probation experience.

  • Jim Cobabe

    Rather than attempting to estimate when artificial intelligence will surpass human intellect, I prefer to see a continuum of complementing capacity, where one type of advancement feeds the other. It works both ways. Computers get smarter, and effectively make the humans that use them smarter. Some human endeavors are augmented by teaming human with machine.

    One thing that makes a human thinking machine different from a digital computer is that humans face biological limitations that machines are not subject to. Eventually, given that they continue to progress at the current rate, machine-based capacity will reach a state where some of their abilities exceed the limit of stand-alone human performance. This has happened already, in certain respects. For some tasks, the machine far exceeds the capacity of any human mind.

    But it is a mistake to keep the two sources separate. Eventually, the distinction will become blurred and meaningless. Our future is perhaps one where the lines between human and machine are erased.

    Judging from the number and diversity of computer applications I see today in common use, it will happen soon.

  • Rob Osborn

    Computer technology is fascinating. A computer is limited only by the minds who program it to function within its bounds. But it has its limits: it can’t make intelligent decisions on its own. It can only do exactly what it is programmed to do. Sure, we may come up with a million different ways it can answer in a situation based on its environment and conditions, but it is still only operating through the power of what the programmers want it to compute. One of the greatest tests of intelligence is that of communication. Humans have the ability to adapt and make unique decisions, especially when it comes to communicating with someone else. If two people do not know each other’s language, they can adapt to the situation and still communicate using logic. But can a computer do this? The question becomes how does one run a program to interpret the language of input it has not been programmed to compute? Sure, there are ways a computer programmer can isolate unknowns and even run a program to search for patterns, similarities or anomalies, but there is no language or input a programmer can give a computer to adapt to learning something not in its computable power. This is why programmers will never truly invent a system that can write its own book, or something novel that it can recognize as original, just as it will always be impossible for a computer to recognize information communicated to it that is not in its power to compute.

  • Sandy Petersen

    I think that the Chinese Room thought experiment clearly explodes the myth that computers will ever think in the manner of humans. And I work with computers every day.

  • Ed

    “I’ll bet he could have won, because he’d be able to adapt to the nature of his opponent more quickly than Deep Blue could”

    Can you not see the irony in saying humans are adaptable and then quoting a book written more than a century ago to argue about the nature of 21st century machine intelligence?

    If history teaches us anything, it is that status quo thinking is seldom a complete model. Do you think that in a thousand years’ time people will say: “2011, that was the year when they nailed what the future ability of computers would be”?