Autonomous self-driving Ubers won’t be taking you on shopping trips or cross-country vacations before 2030. That is because “artificial intelligence” is only one of those things: artificial. It is not the other.
Oh, I’m not a Luddite, but I do think I’m something of a realist. Driverless vehicles have basic operational shortcomings that make them dangerous:
- They go blind in bad weather. Cameras can be obscured by rain, snow, or dust. When lane lines disappear, the vehicles don’t know where they are, and when snowflakes bounce off laser sensors, they appear as obstacles to be dodged.
- Absent lane lines and curbs, travel turns into a negotiation between algorithms.
- Lacking volitional will, driverless vehicles must deal with human drivers who are both more skilled and less predictable.
- Left turns are inherently chancy for any driver, artificial or not.
- They have killed five people: one Arizona pedestrian, three drivers in the U.S., and one driver in China. That’s not as many as vending machines kill every year, but then there are fewer driverless cars.
Those problems may be solved, of course, but algorithmic programs are layered within programs buried under levels of endless code. A driverless car confronts many unanticipated things. An algorithm is really at its best when designed for only one thing: beating a chess champion or a Go prodigy comes to mind. Design one for both chess and Go, and it will probably need a toggle switch.
But this is called artificial intelligence, which is little more than clever marketing. My AI robotic vacuum barely finds its way back to the charging station; eight times out of ten I have to carry the poor beeping thing to its home.
And here we move into the arena of consciousness; specifically, the notion of artificial consciousness, where, it appears, an artificial intelligence awakens to itself or is nudged into wakefulness.
I watched an episode of Star Trek: Voyager (1995–2001). The holographic doctor is essentially ignored by the human crew members. People do not talk to him directly in sick bay. He will ask about a symptom, and the patient will invariably reply not to him but to his pretty alien but flesh-and-blood (oh, wow, is she flesh and blood) assistant. She feels bad for the doctor (who, incidentally, can walk through walls). He is not respected for his talent and medical skill. He is a mere computer simulation, who does surgery and other trivial things, like waving medical tricorders.
It all gets fixed, without any new programming, when he is permitted to deactivate himself when he needs quiet time and to activate himself as he chooses. He becomes volitional (save for doing evil) within the boundaries of his programming, and he is told he needs a name.
Ah, but is he now conscious, as if “consciousness” is finally lodged only as a physical element of the brain?
The consensus of science and much of contemporary philosophy is that human consciousness is all biological, all material. Our neurons fire only in response to the urges of eons of evolutionary development.
When we can figure out where it is ultimately located and how it really works, we can map it, code it, box it up, and put it in a computer to drive cars.
But the problem to explain is the human experience of the brain. Why, as a machine, does the brain persistently insist to itself that it possesses something more, that we have a nonphysical reality and the potentiality of a soul? If self-consciousness is only an evolutionary element, I do not see how conscious awareness of death is any sort of benefit. It does not require intelligence or self-conscious awareness to be a successful species.
Short answer: we are more than our parts.
The play The Hard Problem has a boyfriend and girlfriend arguing over the nature of consciousness. She, unlike her thoroughly materialist boyfriend, aches to believe that people are more than the mere summation of their biological components. “When you come right down to it,” says the girlfriend, “the body is made of things, and things don’t have thoughts.”
She is not far from Ecclesiastes, I think.
“God has made everything beautiful in its time. He has also set eternity in the human heart…” (3:11)
Russell E. Saltzman publishes every Tuesday and Thursday at noon Central Time. He can be reached on Twitter as @RESaltzman, on Facebook as Russ Saltzman, and by email: email@example.com