Last month the artificial intelligence computer system “Watson” beat the two biggest winners in Jeopardy! history. Watson is a form of narrow artificial intelligence: it’s programmed to do one very specific thing, which is answer questions for Jeopardy!. As David Ferrucci, lead researcher of the IBM team that created Watson, said, it can only respond to content it has been given and analyzed; it understands language “only in a way we call statistical machine learning. It gives you the answer that makes sense to you, but it doesn’t mean anything to the computer.” It can’t make a joke or give its own interview.
Computers excel at many tasks where human intelligence fails, and they’re getting faster and faster. But when it comes to basic human abilities such as “spatial orientation, object recognition, natural language, and adaptive goal-setting,” humans still win hands down. Strong AI, or artificial general intelligence, doesn’t exist. But some people think it will, and sooner than you might think.
Technological savant Raymond Kurzweil believes that because computers are getting faster at an ever-increasing rate, this exponential growth will eventually result in humans creating artificial intelligence that is smarter than we are. He estimates this will happen by 2045. It sounds like science fiction, and in fact this scenario is precisely what the very excellent TV series Battlestar Galactica was based on. His critics say that he underestimates the complexity of the human brain. Says biologist Dennis Bray, “Although biological components act in ways that are comparable to those in electronic circuits, they are set apart by the huge number of different states they can adopt.” He says chemical modifications layered on top of modifications, spreading out in multiple directions, result in a “combinatorial explosion of states endow[ing] living systems with an almost infinite capacity to store information.” As someone trained in biology, I’d say the argument from the power of exponential growth falls apart for me: while living systems do experience exponential growth, that growth is always a phase, not a continual state of being. The growth curve of bacteria in culture looks more like a stretched-out letter S than a letter J. Our computing power is growing exponentially, but does that necessarily mean it always will?
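The difference between those two curves can be sketched numerically. This is only an illustration of the S-versus-J shape, not a model of computing power; the starting size, growth rate, and carrying capacity here are arbitrary made-up parameters:

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """J-shaped curve: growth stays proportional to current size, unbounded."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=1000.0):
    """S-shaped curve: the same early growth, but it levels off at a carrying capacity K."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable...
print(exponential(2), logistic(2))
# ...but later the logistic curve saturates near K while the exponential keeps climbing.
print(exponential(30), logistic(30))
```

The point of the comparison is that from inside the early phase, the two curves look identical; only time reveals whether there is a ceiling.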
An article on AI in the March issue of The Atlantic points to human adaptability as a reason artificial intelligence will never beat natural intelligence. People assume that human intelligence is static while artificial intelligence evolves rapidly, but humans adapt too. In the Turing test, computers compete against humans to try to fool judges into thinking they are actually human. If more than 30% of the judges believe a computer is a human being, the computer wins. So far no computer has done it, but they’re getting close. Eventually, a computer will probably pass the Turing test. But does that mean humans are beaten forever? The Atlantic article points out that after the IBM computer Deep Blue beat Garry Kasparov at chess in 1997, Kasparov wanted a rematch, but IBM dismantled the computer and it never played again. Once beaten, Kasparov was ready to re-tool and go at it again. I’ll bet he could have won, because he’d have been able to adapt to the nature of his opponent more quickly than Deep Blue could.
Regardless of the physiological and philosophical arguments about whether strong AI is possible, I think Mormon theology says it’s not. For one thing, Doctrine & Covenants 93:29 says that intelligence is like matter and energy: it can’t be created or made. And the Book of Abraham says that intelligences existed before any physical part of our nature. The scriptures are no doubt using the word intelligence differently from our everyday usage, referring to something spiritual in nature rather than just IQ. But it’s that spiritual intelligence that makes us unique as humans, I think. Could a computer ever feel the Holy Spirit? Would it ever yearn to commune with God? To create? Could it yearn for anything at all? As humans, we don’t just think, we also feel. It seems to me that if an AI system can’t do those things, it’s lacking a significant aspect of human intelligence.
What do you think? Is the Battlestar Galactica scenario possible? Or is general intelligence something no one can create?
1. “10 Questions.” Time, March 7, 2011, p. 104.
2. “Artificial Intelligence? Why Machines Will Never Beat the Human Mind” by Brian Christian. The Atlantic, March 2011, p. 68.
3. “2045: The Year Man Becomes Immortal” by Lev Grossman. Time, February 21, 2011, p. 48.