They’re Made Out Of Meat

I recently had the story “They’re Made Out of Meat” by Terry Bisson drawn to my attention. It is available to read online, and it offers a nice, brief mirror of the way humans are prone to scoff at that which is unlike us.

In my class on religion and science fiction, we’ve spent a lot of time in recent days talking about and reflecting on artificial intelligence. One moment of particular interest was when someone asked whether a human being should risk their life to try to save sentient machines in a burning building. Everyone’s instinct was to answer “no.” But why? And is that the right answer just because it is one many are prone to give? If consciousness is what defines personhood, then would not sentient machines be persons? And if we are not going to give them rights or value their lives, should we create such entities?

And if we don’t value or respect them, then should we really be surprised when eventually they repay our enslavement of them and careless disregard for their existence by visiting a robot apocalypse upon humanity?

  • T. Webb

    Why should we save any persons who are in a burning building? Yes, there is some inner compulsion that would make me want to, but other than that, why? If someone was created in the image of god, maybe, but you’ve debunked that so many times here it’s not worth discussing. Otherwise, among my naturalist friends, many would hold that while life has some value, there’s so much of it that preserving a few here and there isn’t worth it (except to those who are going to perish, of course). This has come up in discussions of abortion when talking about how, say, eagle eggs are illegal to destroy, but human babies (or fetuses [sic] if you prefer Latin) can be destroyed. I’ve usually heard it said that since there are 6-7 billion people, preserving some doesn’t matter compared to the small number of eagles (some species of which are indisputably endangered, of course).

    From a different angle, what if these “sentient machines” can be copied like an .mp3 file and instantly reproduced many times over, like, say, the holographic Doctor on “Star Trek: Voyager”? If three of them are destroyed in a burning building, they can be instantly reproduced; or, heck, three thousand of them can be reproduced. And what if they are backed up to the cloud? Or better, what if they run from the cloud, so that their presence in the burning building is just a “footprint” that can be animated in another machine somewhere else?

    Good questions. But recall that we don’t even give rights to sentient animals like dolphins or cows yet, and won’t for a long time, given the huge profits of the meat industry.

    • http://www.patheos.com/blogs/exploringourmatrix/ James F. McGrath

      I have encountered few people whose naturalism leads them to the conclusion that human lives are not valuable. But perhaps you move in different circles than I do.

      But on the question of copying sentient machines, it isn’t clear that a copy of you or me would in fact be the same person that you or I are now. So why should our ability to restore persons from backup mean that it is not worth saving their continuous lives if possible?

    • Tessa

      Are those eagle eggs inside a living, feeling, thinking human being? Are they interfering with her life, risking her health, and possibly causing her death? Those precious babies of yours don’t grow in a nest of twigs somewhere, harming no one. You pro-lifers are so quick to forget us women, aren’t you?

  • David_Evans

    It may be a result of reading too much Iain M. Banks (in whose “Culture” series some of the machines are generally agreed to be superior to human beings in almost every way) that my instinctive response would be “yes,” qualified by the thought that the machines could presumably shut off any feelings of pain they might have, so burning to death would not be as terrible for them as it would be for me.

  • arcseconds

    Would they save Data?

    I mean, the android from Star Trek, not their backups :-)

    • http://www.patheos.com/blogs/exploringourmatrix/ James F. McGrath

      I suspect some people would be more likely to run into a burning building to save data, as in their information, than to save Data.

      • arcseconds

        The reason I ask is that one of the few things ST:TNG does really well is establishing Data as a sympathetic character. Even if you haven’t really thought about the issue, can’t imagine having feelings for a computer, and are a bit of a Luddite, you’d have to be pretty committed to ‘machines can’t be sentient’ or ‘if it’s not human I don’t need to care about it, even if it is sentient’ to continue to insist that Data has no moral worth.

        On the flip side, I wonder whether what your students are experiencing is a lack of empathy for what isn’t anthropomorphic. Data (or androids more generally) might be a way to probe that.

  • http://irrco.wordpress.com/ Ian

    It is an awesome short story. Everything I love about short fiction and speculative fiction.

    Yes, I would attempt to save a sentient machine.

    “If consciousness is what defines personhood, then would not sentient machines be persons?”

    Yes, tautologically (assuming sentience and consciousness are synonyms for you). But I think you mean the rather less leading question about ‘what is a person’. Unfortunately, that is one of those pointless questions that is dangerous because it tricks us into thinking we’re discussing something significant and underlying, when we’re only arguing about who gets to define a word.

    The real question is: what classes of legal rights should we give to machines, based on their configuration and/or the behaviours they are capable of exhibiting? And I don’t think this is a sci-fi question. I think it is a very important question that needs approaching now, one that will become a big, nasty civil rights issue soon enough. If not within one generation, then within two or three.

    When that happens, the rhetoric will be full of endless debates about the ‘real’ meanings of words like ‘conscious’, ‘intelligence’, and ‘person’. “Yes, but is that computer program really intelligent, or is it just doing enough calculations to come up with seemingly intelligent answers?” All of which will be utterly pointless, time-consuming, and tendentious.

    You could do a service to future humanity, James, by banning discussions of what would make a machine into a ‘real’ anything, and discussing machines in their own terms instead.

    • http://www.patheos.com/blogs/exploringourmatrix/ James F. McGrath

      As I argue in my “Robots, Rights, and Religion,” we need to err on the side of giving rights rather than denying them to those who may deserve them. And we’ll likely have to face the legal issue before we can hope to have a solution to the underlying philosophical problem. I like your way of posing the matter.

      • http://irrco.wordpress.com/ Ian

        Cool.

        I wanted to get a chance to write a blog post on this over the weekend, but I’m running out of time to get a more pressing job finished.

        What I wanted to follow up on was the anthropomorphism of non-human intelligence, a tendency that I think is both obvious and understandable.

        But I think there’s a very important facet of that which is highly misleading for our reasoning: the assumption of individuality and separability.

        When you read about AI in sci-fi or in many bits of futurism, AI is conceived of as human-scale, independent, and modular. So we can talk about what rights we might give to ‘an’ AI, assuming that an embodiment of AI will be an ‘a’.

        In reality, most AIs of any sophistication being built at the moment are not individual things: they are massive networks of processes, data, and algorithms with no obvious boundaries. The vast majority of all stock trades in the world are carried out by AI now, but can you say “here is the stock-trading AI”? Not simply: the AI is an emergent property of certain behaviours of a large number of systems. Many of those systems are doing other things that do not contribute to the AI. Some of those systems have humans in the loop, not as decision makers, but carrying out the algorithmic processes required for the overall decision making to happen.

        The idea that AI is going to be made out of individuals that can be assigned human-like rights is naive about the AI actually in use. It is perhaps the same kind of human tendency, the same kind of reasoning mistake, as the human desire to build personal Gods.

        To the extent that AI will have a personal form, it will be that way specifically in order to help people interact with it.

        • http://www.patheos.com/blogs/exploringourmatrix/ James F. McGrath

          I am trying to think of a good example of that sort of thing in science fiction. It appears in Fall of Hyperion by Dan Simmons, but I have a feeling that there are others. Maybe also in Ghost in the Machine?

          • http://irrco.wordpress.com/ Ian

            Hmm, it’s been a while since I read the Hyperion books, but I remember the AIs as still being quite distinct. A quick trip to the wiki suggests I’m remembering it right, as there are distinct characters among the AIs. Good excuse to read them again, though. Thanks for the tip!

            I haven’t seen Ghost in the Machine.

            • http://www.patheos.com/blogs/exploringourmatrix/ James F. McGrath

              I thought that there was also an intelligence that emerged and existed across a vast network of relays in Fall of Hyperion, but I may be running that story together with something else in my mind.

              • http://irrco.wordpress.com/ Ian

                I’ve sent it to my Kindle to read. I don’t remember very much about it; it’s been nearly 20 years, so you probably remember it better than I do.

