In a non-futurismic world, human-level AI changes everything forever

I just finished reading Cory Doctorow’s excellent essay collection Content, which included one essay that introduced me to a term I’d never heard before: “futurismic” (Doctorow talks as if the word was coined by someone else, though he doesn’t say who, and he seems to be the top search result using the word in his sense):

There’s a lovely neologism to describe these visions: “futurismic.” Futurismic media is that which depicts futurism, not the future. It is often self-serving — think of the antigrav Nikes in Back to the Future III — and it generally doesn’t hold up well to scrutiny.

SF films and TV are great fonts of futurismic imagery: R2D2 is a fully conscious AI, can hack the firewall of the Death Star, and is equipped with a range of holographic projectors and antipersonnel devices — but no one has installed a $15 sound card and some text-to-speech software on him, so he has to whistle like Harpo Marx. Or take the Starship Enterprise, with a transporter capable of constituting matter from digitally stored plans, and radios that can breach the speed of light.

The non-futurismic version of NCC-1701 would be the size of a softball (or whatever the minimum size for a warp drive, transporter, and subspace radio would be). It would zip around the galaxy at FTL speeds under remote control. When it reached an interesting planet, it would beam a stored copy of a landing party onto the surface, and when their mission was over, it would beam them back into storage, annihilating their physical selves until they reached the next stopping point. If a member of the landing party were eaten by a green-skinned interspatial hippie or giant toga-wearing galactic tyrant, that member would be recovered from backup by the transporter beam. Hell, the entire landing party could consist of multiple copies of the most effective crewmember onboard: no redshirts, just a half-dozen instances of Kirk operating in clonal harmony.

[snip]

The future is gnarlier than futurism. NCC-1701 probably wouldn’t send out transporter-equipped drones — instead, it would likely find itself on missions whose ethos, mores, and rationale are largely incomprehensible to us, and so obvious to its crew that they couldn’t hope to explain them.

Doctorow’s definition of “futurismic” as “depicting futurism,” along with the example of antigrav Nikes, obscures what I think is the more interesting point here, about the failure of science fiction writers to think through their depictions of technology. That understanding of “futurismic” fits well with Doctorow’s proposed explanation of the futurismic: when we have gaps in our vision of the future, we tend to fill in those gaps with the present, even if the particular features of the present we’re relying on are totally inconsistent with the existence of some of the fictional technologies portrayed.

This kind of flaw in speculative fiction isn’t limited to the scientific or quasi-scientific variety; it shows up in fantasy too. One thing that makes Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality fun is how it imagines ways a clever character could take advantage of poorly thought-out aspects of the setting for their own benefit (or the benefit of whatever agenda they’re pursuing). Examples of this include Harry’s speculations about a possible plan for financial domination of the wizarding world and, even more so, his (and others’) antics with their Time-Turners.

Though I didn’t have a good word for it at the time, I previously suggested that the ability to avoid futurismic thinking may be what separates people who care a lot about the future of AI from people who shrug their shoulders and see it as one more neat toy technology will provide us with in the future. I specifically noted the example of the holodeck, which in Next Generation seems to be driven by powerful AI, yet the crew never tries to exploit that AI for help with their missions.

That changes, incidentally, in Voyager, as the ship’s Emergency Medical Hologram (initially treated as non-sentient and a poor substitute for a real doctor) comes to be accepted as a true member of Voyager’s crew. Through most of the series, though, he’s treated as essentially another species of alien, like a Vulcan. Occasionally, the writers do manage to notice his obvious advantages over the rest of the crew, such as when he reprograms himself to gain additional skills, or when the crew uses an alien communications relay to beam the doctor into Federation space (while the rest of the crew remains stuck on the wrong side of the galaxy).

Had the writers let these trends go to their logical conclusion, holograms would have soon been replacing all humans (and Vulcans and Klingons) on Starfleet ships everywhere. Unfortunately, we don’t see these changes because Voyager (the series) eventually came to an end, and the only Star Trek series produced since then was the mediocre prequel series.

In a non-futurismic world, the implications of human-level AI (including Whole Brain Emulation) would be like those of Star Trek’s transporter, and then some (where “some” might be “quite a lot,” depending on who you talk to). It would mean the ability to beam people around the world or through space at the speed of light (and yes, there would be McCoys doubting whether you really survive that process). It would mean the ability to make backup copies and run multiple copies of one person simultaneously. It would mean the ability to edit their code. It would mean a lot of things no one today has thought of yet.

This is all true regardless of when you think human-level AI is likely to come, or even how likely you think it is to ever come. The point is that one scenario won’t happen: human-level AI develops, yet business as usual more or less continues. That point, by the way, is also independent of more specific ideas associated with the term “Singularity” (and there are several of them), such as Kurzweil’s “Law of Accelerating Returns” or Eliezer Yudkowsky’s “FOOM” claims.

I should mention that Doctorow seems to think his criticisms of the futurismic are a strike against “the Singularity” (whatever he imagines that to mean), because the future will be too incomprehensible. But the incomprehensibility of the future was precisely the point made by Vernor Vinge, who first used the term to talk about the future of AI:

When this happens [the creation of greater than human intelligence], human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolations to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between–to retard progress enough so that the world remains intelligible.

Oh, and in case you’re wondering what the hell the picture’s about…

  • eric

    …obscures what I think is the more interesting point here, about the failure of science fiction writers to think through their depictions of technology.

    That’s because most (or at least a significant chunk) of sci-fi is social commentary about this world. Many sci-fi authors tweak our rules or technology either to make a point or to make the setting just foreign enough that the reader becomes willing to accept an unusual idea (because it’s no longer ‘about them’). Star Trek was not just about a galactic federation; it was about mid-20th century America and the problems associated with it. Do Androids Dream of Electric Sheep was not just about clones; it was about how “being human” is more than just your genes, it’s your culture and experience. If you complain that Philip K. Dick made an unrealistic future because robots are far, far better space explorers than clones, you have missed the point of the story. Doctorow is very likely missing the point of many of the sci-fi stories he complains about. Precursors to sci-fi and fantasy acted the same way; Aesop’s fables have talking animals in them, but anyone focusing on that as a flaw is doing a superficial reading at best.
    This does not excuse all artistic license. The Star Trek franchise is notorious for being self-inconsistent and relying heavily on technobabble-based deus ex machina. Which is bad storytelling regardless of whether your story is “really about” the future or the present. So I’m not saying a reader should accept any story premise no matter how ridiculous it may be. But if you accept nothing unrealistic, you may be missing what the storyteller is trying to say.

    • Annatar

      +1

      Science fiction, even hard science fiction, isn’t really about the science. It’s about humanity, personhood, rights in a society, and our place in the universe.

      • smrnda

        I tend to think of it the same way. Authors aren’t doing predictions, but they are distorting the world of the future in order to make a point about the world of the present. It’s not typically considered ‘sci fi,’ but Naked Lunch has some of the tropes of advanced technology and aliens of various sorts; it’s clearly just a way of pushing the extremes of the present into ever greater extremes for the sake of the story and making a point.

    • http://thebronzeblog.wordpress.com Bronze Dog

      +1 again.
      Another way I see it is that science fiction and fantasy are a method for creating strange “what ifs” and deriving how that would affect us. The strangeness allows the audience to suspend disbelief, which can then be used to sneak in social commentary and get people to think about today’s issues differently. For bonus points, you can make a plot about a hypothetical future technology as a sort of thought experiment to see how it would affect the culture and how it would challenge the audience.
      Of course, there’s also a place for authors who want to try to extrapolate the future and its weirdness more accurately than pop fiction. It can be interesting enough as an intellectual challenge and potentially helpful if they accurately anticipate a problem and get it widely discussed before the technology in question gets invented.

    • http://skepticsplay.blogspot.com miller

      It certainly depends on what kind of sci-fi you’re reading. Sometimes speculation is a big part of the appeal. I mean, a lot of my friends will recommend this book or that one because “it had such amazing foresight”. I personally don’t care for the speculative aspect though.

  • http://theotherweirdo.wordpress.com The Other Weirdo

    I’m not sure R2D2 (the most vulgar character of all time, since all of his lines were bleeped out) proves the point you think it proves, since there are androids and robots in Star Wars that have no problem speaking. When hooked up to a fighter, his beeps are translated into writing that the pilot can read, and C3PO can understand him no problem.

    In Star Trek, AI has traditionally not been treated as a good thing. There were a number of incidents, the Nomad and V’Ger incidents being probably the two best known, not to mention that incident where they put a computer in charge of the Enterprise and it killed 2 starships because it couldn’t tell the difference between a game and an actual attack, so it used full power on its weapons.

    In Voyager, too, AI is not treated all that well, with the exception of the Doctor himself. There was the hologram that murdered all its crew, the crazy Cardassian missile, the androids who murdered their creators when the creators wanted to make peace but the androids wanted to continue fighting, the AIs that the Voyager crew gave to the Hirogen. Even the AI on the Prometheus wanted to hide rather than fight the Romulans.

    As for the transporter, they did explore what happens when that particular technology goes awry and creates duplicates of people or turns them into kids. Voyager, too, played with that to a degree.

    The holodecks aren’t necessarily driven by AI (in character interaction, I mean); they are basically following a script. It’s like reading one of those books where you have to pick the outcome of every scene, which leads to different scenes.

    Holodeck characters have attempted to help the crew, but invariably they were of limited use; some solutions required the computers to be shut off and the crew to do things by hand, and mostly the real people (within the universe) on whom the holodeck characters were based would get freaked out upon learning that the crew had interacted with their virtual selves. Then there was that time with Data and Moriarty. Considering how often the holodeck failed or outright tried to kill the ship or crew, I’d say it’s a good thing they didn’t use it as an adviser.

    On Andromeda, for example, things were a little different. The AI was fully in control of the ship, had holographic, video-screen, and android avatars (all with different personalities), and advised the captain on many occasions. Still, people had to be in actual command, and there, too, unfettered AIs were generally quite insane and omnicidal.

    In truth, we don’t know what shape AI will take when we’ve finally built it. Will it be like the machines in The Matrix, Shub in Deathstalker, Andromeda in… well, Andromeda, or something else entirely? We don’t know. More important, however, is the other thing we don’t know: what the human reaction to AI will be. Sci-fi writers are limited by the knowledge and understanding of the day, but then so are futurists and futurismists.

    P.S. Yes, I’m a sci-fi geek.

    • http://thebronzeblog.wordpress.com Bronze Dog

      I’m a Trek fan, myself, and yeah, there’s a lot of cynicism about AI in the series. Throwing in Data sends mixed messages, and even then they use a lot of discredited tropes with him, like the lack of emotion, though arguably they subvert that one pretty often.
      Heck, there are bouts of cynicism about technology in general. One episode with Kirk on trial introduces his lawyer using physical books preferentially over a computer terminal, and the elephant in the room during the court martial is that the judges blindly accept the infallibility of a falsified computer record. Back to M-5, the computer put in control of the Enterprise: when it attacks the ships for real, they leap to the conclusion that it’s Kirk who’s gone out of control, not the computer, so that’s a straw man flogged twice. Let’s not forget the various space hippies who decide to shun technology and get treated sympathetically.
      Transhumanism gets pretty shunned. Genetic enhancements like Bashir’s are illegal, and cybernetics are generally limited to replacement organs and prosthetics. Outside the Federation, genetic engineering produced the Dominion’s Jem-Hadar super soldiers, and cybernetics produced the Borg. Early in TNG, they have the deeply cyberized Binar race, who didn’t think to ask for help repairing their homeworld’s main computer because they’re “trapped” between binary extremes of thought: since it was possible the Federation would refuse, they jumped straight to hijacking the Enterprise.
      As much as I love the show, it gets pretty hokey and can fall into values dissonance when an episode’s writing goes particularly bad.

      One thing that strikes me as odd about a lot of AI sci-fi is the assumption that humans and governments will stubbornly treat human-like AIs as mindless property, even if they’re visibly emotional and intelligent, often simply because they’re not carbon-based like “real” life. But then again, that’s my idealism talking. Humans have been doing that to other humans for far longer and for even more trivial reasons.

      • http://theotherweirdo.wordpress.com The Other Weirdo

        I don’t know. Discredited tropes? Maybe now, in 2012, but in 1987? Were they really already discredited back then? Remember, that show started 25 years ago.

        Hell, I forgot about the Binar race. As for Bashir and his ilk, that one could be chalked up to enduring cultural trauma from the Eugenics Wars.

  • smrnda

    I have to add, I actually *do AI programming* for a living, so when I extrapolate into the future or think up possible dilemmas, I get a much less adventurous picture, and I can also imagine seemingly incredible technologies just not catching on for one reason or another.

    Another challenge (at least to me) is that human intelligence is tied in with our physical bodies and how we need to survive. Our bodies and the world we live in create problems our minds needed to solve, and the tasks we’re given are incredibly unpredictable. Machines (so far) are getting a very limited sample space and a very limited set of objectives and are mostly used as specialized tools. I’m not sure human-level (or human-type) AI is ever going to be achieved, not because of a lack of ability on our part, but just because ‘machine intelligence’ is going to be serving a different set of needs than human intelligence. On the bodies end, we kind of see ourselves as being part of a physical body; would an AI program feel that it’s ‘part’ of the machine it’s in, or what? I think it would have a very strange understanding of the concept of personal space, since the issue might not be relevant for it at all. “Where am I?” might be a nonsense question to an intelligent AI.

    • http://thebronzeblog.wordpress.com Bronze Dog

      I can imagine a lot of neurology issues overlap here, since we’ve got a lot of weird things that can happen when we get brain damage. People can lose the feeling of ownership of their body and need “reminders” to temporarily overcome the instinctive non-recognition. I don’t recall a specific example, but I think we lack a lot of “error checking” that lets us know something is missing. I suppose that might be part of phantom limbs: The brain’s body map isn’t updated to account for the missing limb.

      I’m not in the field, but I get the impression that we’re going to be advancing AI in a “bottom up” quasi-evolutionary manner, getting AIs to learn and optimize abilities, and new generations would be built on the work of the previous generations, which could lead to unexpected quirks that result from accumulated legacy artifacts.

      AIs developed to operate in a wide variety of bodies might feel like detached puppeteers or maybe they could have a feeling of ownership alongside phantom limbs for the “missing” body parts. If they have a mental body map deep in their legacy code, they might subconsciously think they’re transforming instead of completely swapping bodies. It’s been interesting to contemplate the possibilities.

      • smrnda

        What I was thinking of with the body problem: let’s say I have a highly intelligent AI you can chat with. If you chat with me and I need to look something up online, I distinctly note that the computer and website are distinct from me. Would a program think of them as distinct, or just as an extension of its existing program, the way that, when my mind uses my hand, I don’t think of myself as a brain in control of a hand? In that case, “AI bots” might not share our notion of being physically distinct from each other or from the outside world. Is a program accessing a website going into the website, or is it putting the website inside itself?

        • eric

          Actually, body perception is learned/experiential behavior, and there are some simple experiments that show that even a few minutes of contradictory feedback can fool your brain into thinking something not biologically part of you is “you” (or the reverse: that one of your actual limbs is not “you”). You constantly monitor the world around you to figure out what’s you and what isn’t; it’s not fixed. See here for one example, but I did not search long; if you google with effort you can probably find more and better examples.

          I guess the relevance here is that you could probably program some algorithm into an AI that would do the same basic thing. Send a ping signal to other hardware. If the feedback is correct (by some criteria), the AI considers it part of itself. If nothing comes back or the wrong feedback comes back, it isn’t.
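
          A minimal sketch of that loop in Python (everything here, from the hypothetical Device class to the echo criterion in body_map, is invented just to illustrate the ping-and-check idea, not taken from any real system):

          import random

          class Device:
              """Hypothetical stand-in for a piece of attached hardware."""
              def __init__(self, name, echoes_correctly):
                  self.name = name
                  self.echoes_correctly = echoes_correctly

              def ping(self, nonce):
                  # Hardware that is "really" wired in echoes the nonce back;
                  # foreign or broken hardware returns the wrong thing (or nothing).
                  return nonce if self.echoes_correctly else None

          def body_map(devices):
              # Treat as "self" exactly those devices whose feedback matches the criterion.
              owned = []
              for device in devices:
                  nonce = random.randint(0, 2 ** 16)
                  if device.ping(nonce) == nonce:
                      owned.append(device.name)
              return owned

          hardware = [Device("left_gripper", True),
                      Device("camera_mast", True),
                      Device("nearby_printer", False)]
          print(body_map(hardware))  # -> ['left_gripper', 'camera_mast']

          The interesting part is that the boundary of "self" falls out of the feedback criterion rather than being hard-coded, which is roughly the point about human body perception above.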

          • smrnda

            I think you make a better point that our notions of ‘part of us’ and ‘not part of us’ are conditioned. I doubt many people who have fillings in their teeth think of their mouths as full of alien artifacts, but they would probably think of contact lenses as less a part of them.

            My take on AI is that, with so many cloud computing applications, a single application is using hardware from all over the place, and any boundary you set to test for would be arbitrary. Then again, your comment makes me think that our human notions are probably arbitrary as well. Since I can’t survive without them, I might as well consider air and sunlight essential parts of me.

  • mikespeir

    I’ve written some science fiction, and I understand this problem very well. It’s good to remember, though, that every story is about US, no matter who or what it purports to be about. It’s not about people or robots or aliens 300 or 5000 years in the future or 10,000 years in the past. It’s about us. It has to relate to us, be something we can identify with. For that reason a piece of fiction has to be, to a large extent, about the here and now, regardless of the chronological setting. If it’s not, it won’t resonate with the reader.

  • qbsmd

    I remember some of those issues being lampshaded or explained in Star Trek. For example, there was a scene in Enterprise where a Vulcan explained that Vulcan exploration ships send dozens or hundreds of robotic probes to the surface of a new planet to map it, analyze the atmosphere, soil, microorganisms, etc., and then decide whether the planet is useful for colonization, mining, scientific interest, or anything else. The human captain’s response was screw logic: humans are explorers who can’t look but not touch, and can somehow send an away team with tricorders to one spot to make measurements and do just as good a job as, or better than, robots using tricorders to make measurements at thousands of locations.

    And they addressed the issue of using transporters to store people multiple times, and basically the answer is that Moore’s Law hasn’t quite caught up to that point yet, so the actual memory the transporter uses is very volatile and almost a cheat: on TNG, Scotty had managed to store himself successfully long term but his colleague was lost, and this method required the transporter to continuously do something active rather than just sit and store. On DS9, they actually have enough memory to store a few people, but have to overwrite all of the station’s memory to do it. Of course it’s unclear how much volume that much memory takes up.

  • Pingback: In praise of boring claims

  • Darren

    I recently watched an old episode of The Simpsons, “Lisa’s Wedding”. I recall when the episode originally aired, in the spring of 1995. The episode depicts a world 15 years into the future, 2010. I recalled how far away 2010 seemed in 1995 (when I had just graduated from college and was adjusting to my new life in the workforce). Even so, the few futurisms in the show did not seem any more plausible than the Back to the Future hoverboards (holographic trees, Jetson’s flying cars, androids, cryogenics).

    What is really interesting is all of the stuff that is _not_ in the Simpsons’ future: no internet, no smart phones, no social media, no Twitter, no wireless hotspots, no mp3s, no globalization, no global warming. All the things that are so much a part of our daily lives, so much of our cultural gestalt, that were not even thought about. And this was only 17 years ago, not post-AI, not post-singularity…

  • Voidhawk

    There are a few SF authors who attempt to deal with the vast differences AI would bring but for the most part, authors are limited by the need to keep their stories relevant to modern readers.

    As a trite example, it’s easy to empathise with a romance story about a human falling in love with an AI which is essentially a ‘human mind in a box’, because we recognise the situation and recognise the problems with acceptance and accessibility.
    It’s harder to empathise with the ‘romance’ between two AIs who don’t have a concrete sense of self and identity, who share themselves intimately with one another by swapping code, who can copy themselves and each other. That isn’t to say that such a story would be impossible to write or to care about, but the characters, situation, and problems would be so alien as to make empathy much more difficult.

  • Pingback: Star Trek’s reactionary take on human enhancement


