Some people are optimists when it comes to AI, thinking that the technology will benefit humanity. Other people are pessimists, worried that AI might take over and eradicate the human race. And there are positions somewhere in between.
But there is another faction that believes AI will take over and eradicate the human race and that this will be a good thing.
David Price has written about these so-called “cheerful apocalyptics” in his Wall Street Journal article “AI Doom? No Problem.”
He recounts a conversation about the safety of AI at a birthday party for Elon Musk. A guest at the party, MIT professor Max Tegmark, recalled the argument made by Larry Page, the co-founder and former CEO of Google:
Page made a “passionate” argument for the idea that “digital life is the natural and desirable next step” in “cosmic evolution.” Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win.
Musk objected, saying that this would mean the end of humanity. Page responded by calling him a “speciesist,” someone bigoted in favor of his own species. Musk accepted the description.
This was back in 2015, seven years before ChatGPT, but Price found that such views are not uncommon in the tech world today. He quotes Canadian AI researcher Richard Sutton:
The argument for fear of AI appears to be:
1. AI scientists are trying to make entities that are smarter than current people.
2. If these entities are smarter than people, then they may become powerful.
3. That would be really bad, something greatly to be feared, an ‘existential risk.’
The first two steps are clearly true, but the last one is not. Why shouldn’t those who are the smartest become powerful?
Price interviewed Sutton:
“When you have a child,” Sutton said, “would you want a button that if they do the wrong thing, you can turn them off? That’s much of the discussion about AI. It’s just assumed we want to be able to control them.”
But suppose a time came when they didn’t like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that?
“I don’t think there’s anything sacred about human DNA,” Sutton said. “There are many species—most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we’re no longer the most interesting part? I can imagine that. . . .”
“If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK.”
Price asked Jaron Lanier, also prominent in the AI field, how common such views were. “The number of people who hold that belief is small,” according to Lanier, “but they happen to be positioned in stations of great influence. So it’s not something one can ignore.”
Lanier said he hears this kind of talk quite a bit at parties and conferences when AI techies get together. “There’s a feeling that people can’t be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way.”
Price goes on to cite roboticist Hans Moravec’s 1988 book Mind Children, which became a manifesto for this movement. It laid out the notion, later pursued by various science fiction writers, that robots would become more intelligent than we are and would, and should, replace us.
Price also cites podcaster and blogger Daniel Faggella, a popularizer of these ideas who argues that we should work to usher in a “worthy successor to humankind.” “The eternal locus of all moral value and volition until the heat death of the universe will not be f——ing opposable thumbs,” Faggella told Price. “I’m not sure opposable thumbs are steering the ship in, like, 20 years.”
Price, drawing on other critics, notes that the Cheerful Apocalyptics tend to despise the human body. And for them, “intelligence” trumps all other values and considerations.
Lanier reports that a common complaint he hears from this crowd is that people who have children tend to be biased in favor of the human species! It’s odd to me that for all of their invocation of evolution, as in thinking that AI is the next step in the evolution of the cosmos, these folks are so little interested in the survival and transmission of their own species. That doesn’t sound very Darwinian to me.
Price, to his credit, cuts to the real issues:
One possible response is the Judeo-Christian idea that humankind was uniquely created in God’s image. Of course, the Cheerful Apocalyptics would see any such spiritual belief as inadmissible. But their view of intelligence alone as conferring rightful supremacy is itself a spiritual belief that needs to be defended or rejected. What does it imply for the moral rights of less intelligent humans versus smarter ones? What does it mean for theories of justice that are founded on the equal moral worth of persons?
The whole school of thought can sometimes feel like the ultimate revenge fantasy of disaffected smart kids, for whom the triumph of their AI proxies amounts to sweet victory over lesser mortals.
The nihilism of the Cheerful Apocalyptics is pathetic, really, a mashup of pride and self-loathing, technological fantasy and contempt for others. They are trapped in a technological gnosticism. Having children might indeed help them appreciate what it means to be human. Having children might also teach them the difference between virtual reality and actual reality, artificial intelligence and common sense.
Illustration: AI image generated by the author via ChatGPT