I just finished reading Cory Doctorow’s excellent essay collection Content, which included one essay that introduced me to a term I’d never heard before: “futurismic” (Doctorow talks as if the word was coined by someone else, though he doesn’t say who, and he seems to be the top search result using the word in his sense):
There’s a lovely neologism to describe these visions: “futurismic.” Futurismic media is that which depicts futurism, not the future. It is often self-serving — think of the antigrav Nikes in Back to the Future III — and it generally doesn’t hold up well to scrutiny.
SF films and TV are great fonts of futurismic imagery: R2D2 is a fully conscious AI, can hack the firewall of the Death Star, and is equipped with a range of holographic projectors and antipersonnel devices — but no one has installed a $15 sound card and some text-to-speech software on him, so he has to whistle like Harpo Marx. Or take the Starship Enterprise, with a transporter capable of constituting matter from digitally stored plans, and radios that can breach the speed of light.
The non-futurismic version of NCC-1701 would be the size of a softball (or whatever the minimum size for a warp drive, transporter, and subspace radio would be). It would zip around the galaxy at FTL speeds under remote control. When it reached an interesting planet, it would beam a stored copy of a landing party onto the surface, and when their mission was over, it would beam them back into storage, annihilating their physical selves until they reached the next stopping point. If a member of the landing party were eaten by a green-skinned interspatial hippie or giant toga-wearing galactic tyrant, that member would be recovered from backup by the transporter beam. Hell, the entire landing party could consist of multiple copies of the most effective crewmember onboard: no redshirts, just a half-dozen instances of Kirk operating in clonal harmony.
The future is gnarlier than futurism. NCC-1701 probably wouldn’t send out transporter-equipped drones — instead, it would likely find itself on missions whose ethos, mores, and rationale are largely incomprehensible to us, and so obvious to its crew that they couldn’t hope to explain them.
Doctorow’s definition of “futurismic” as “depicting futurism,” along with the example of antigrav Nikes, obscures what I think is the more interesting point here: the failure of science fiction writers to think through their depictions of technology. That understanding of “futurismic” fits well with Doctorow’s proposed explanation of it: when we have gaps in our vision of the future, we tend to fill in those gaps with the present, even when the particular features of the present we’re relying on are totally inconsistent with the existence of some of the fictional technologies portrayed.
This kind of flaw in speculative fiction isn’t limited to the scientific or quasi-scientific variety; it shows up in fantasy too. One thing that makes Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality fun is how it imagines ways a clever character could take advantage of poorly thought-out aspects of the setting for their own benefit (or the benefit of whatever agenda they’re pursuing). Examples include Harry’s speculations about a possible plan for financial domination of the wizarding world, and, even more so, his (and others’) antics with their Time-Turners.
Though I didn’t have a good word for it at the time, I previously suggested that the ability to avoid futurismic thinking may be what separates people who care a lot about the future of AI from people who shrug their shoulders and see it as one more neat toy technology will provide us with in the future. I specifically noted the example of the holodeck, which in Next Generation seems to be driven by powerful AI, yet the crew never tries to exploit that AI for help with their missions.
That changes, incidentally, in Voyager, as the ship’s Emergency Medical Hologram (initially treated as non-sentient and a poor substitute for a real doctor) comes to be accepted as a true member of Voyager’s crew. Through most of the series, though, he’s treated as essentially another species of alien, like a Vulcan. Occasionally, the writers do manage to notice his obvious advantages over the rest of the crew, such as when he reprograms himself to gain additional skills, or when the crew uses an alien communications relay to beam the doctor into Federation space (while the rest of the crew remains stuck on the wrong side of the galaxy).
Had the writers let these trends go to their logical conclusion, holograms would have soon been replacing all humans (and Vulcans and Klingons) on Starfleet ships everywhere. Unfortunately, we don’t see these changes because Voyager (the series) eventually came to an end, and the only Star Trek series produced since then was the mediocre prequel series.
In a non-futurismic world, the implications of human-level AI (including Whole Brain Emulation) would be like those of Star Trek’s transporter, and then some (where “some” might be “quite a lot,” depending on who you talk to). It would mean the ability to beam people around the world or through space at the speed of light (and yes, there would be McCoys doubting whether you really survive that process). It would mean the ability to make backup copies of a person and run multiple copies of that person simultaneously. It would mean the ability to edit their code. It would mean a lot of things no one today has thought of yet.
This is all true regardless of when you think human-level AI is likely to come, or even how likely you think it is to ever come. The point is that what won’t happen is that human-level AI develops, but business as usual more or less continues. That point, by the way, is also independent of more specific ideas associated with the term “Singularity” (and there are several of them), such as Kurzweil’s “Law of Accelerating Returns” or Eliezer Yudkowsky’s “FOOM” claims.
I should mention that Doctorow seems to think his criticisms of the futurismic are a strike against “the Singularity” (whatever he imagines that to mean), because the future will be too incomprehensible. But the incomprehensibility of the future was precisely the point made by Vernor Vinge, who first used the term to talk about the future of AI:
When this happens [the creation of greater than human intelligence], human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolations to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between–to retard progress enough so that the world remains intelligible.