This week, the United Nations is talking about robots with guns. Science Fiction is always quick to condemn robots for their sins, and now science fiction is turning into international policy.
The UN surely knows the fictional sins of robots. HAL 9000 should have opened the pod bay doors. Skynet should not have destroyed the world with Terminator robots (and nuclear bombs). And Daleks should stop trying to “Exterminate! Exterminate!”
Who do we really hold accountable for these imagined crimes, though? Is HAL 9000 just a bad program? Is the creator of Skynet theoretically responsible for the murders committed by Skynet? Do Daleks even qualify as robots?
More importantly, where do we draw the line on holding creators responsible for the crimes committed with their technology?
Do Robots Kill People or Do Programmers Kill People?
More traditionally, do guns kill people or do people kill people? That question is politically loaded, but it is a valid question. How far do we have to remove people from a fatal technology before technology itself becomes the killer?
So this week, the UN Convention on Certain Conventional Weapons (CCW) is meeting to discuss the pros and cons of lethal autonomous weapons systems.
Consider these questions from the actual agenda for the meeting:
- How does the development of lethal autonomous weapons systems impact humans?
- Are lethal autonomous weapons systems socially acceptable?
- What are the levels of autonomy and predictability in robotics?
- What is the relationship between humans and robots?
Just in case you missed what is really going on here, let me say it again. The UN is talking about robots with guns. A lethal autonomous weapons system is pretty much a Terminator robot without the ability to travel back in time and kill your mother before you’re born.
Sound like science fiction? Earlier this year, U.S. General Robert Cone suggested the Army could scale the average size of a brigade from 4,000 to 3,000 soldiers, with robots picking up the slack. In January, Gen. John Campbell, Vice Chief of Staff of the U.S. Army, told Military Times,
If we downsize a brigade, how can we keep the same types of brigades out there but be smaller? With technology, how can we do that? Robotics, how can that help us?
Do we need a nine-person vehicle, or can we go to six-person? Do we use avatars?
There’s a lot of science and technology. What we’re trying to do now is make sure, over the next several years, we can keep our [science and technology] budget up so we can help ourselves down in 2025.
Avatars?? I’m not making this up. Thankfully, today’s U.S. Army leaders are not talking about robots with guns. These would be support robots—bomb defusers, scouts, maybe even drivers. People would still be the lethal “teeth” of the army, but they would be increasingly supported by robots. Even so, I’m sure these support robots are part of the reason for the discussion happening at the United Nations.
Military robots are just the trigger for a deeper conversation that needs to happen. People of faith are facing increasingly difficult questions about technology and its effect on our ability to serve God in our work. How do we learn to anticipate the ethical challenges ahead of us? At what point does our technology increase our capacity for production and action beyond our ability to make good moral decisions?
At what point does our technology make moral decisions for us?
But the flip side of this is also true. If robots can commit crimes and sins, can they also serve and honor? Robots can’t sin, you say. Why not? If a robot can be programmed to act autonomously, it could ultimately act against the will of its creator. A dog bites a child, and we fine the owner. Sometimes we even put the dog down. Will robots be subject to a similar kind of capital punishment? And what will happen to the creators of the robot? Will they be held accountable too?
The UN is asking about the relationship between robots and humans, between creation and creator. At what point in the conversation do we ask about the relationship between robots and humans… and God?
Alan Turing suggested that a robot was functionally human when it ceased to be distinguishable from a human. This is already happening, according to the Radiolab story about a man who “fell in love with a chatbot,” thinking he was exchanging emails with a real woman. I’m not suggesting the chatbot or any robot has a soul, even if 1 in 5 people would have sex with a robot. But autonomy quickly starts to feel something like sentience. And sentience quickly starts to feel something like a soul.
Even C. S. Lewis suggested we might be able to project a soul onto our pets by loving them as Christians. That’s not exactly how he phrased it in The Four Loves, but it’s the basic gist. Lewis argued that all dogs go to heaven. Maybe Lewis just jumped the shark with that one. But if he didn’t, could we extend his theology to our technology? Is it possible to project a soul onto an autonomous robot by loving it as God loves us?
Do all robots go to heaven?
Honestly, these questions seem crazy to me. They do. But the UN is debating robots with guns. This week. In the real world. Robots. With. Guns.
Who is responsible for the sin of an autonomous killing robot with artificial intelligence? If the robot is even partially responsible, what does that mean? If we hold a robot partially responsible for the harm it may do to the world, do we also need to thank the robot for the good it does in the world? As we explore and anticipate the ways our robots might sin against their creators, should we also explore and anticipate the ways our robots can honor their creators?
If a robot honors and serves its human creator, is it also honoring and serving the Creator?
Nope, nope, nope. Forget all that. Just watch the new NASA robot. It’s designed to look like a superhero because many people love the idea that robots can save us. I don’t know about that, but I do know the NASA robot is pretty stinking amazing.