When I moved this blog to Patheos, I was asked to pick a tagline and I ended up going with the first thing that came to mind: “Philosophy, atheism, killer robots.” Roll your cursor over the browser tab and you should be able to see it. This was not a throw-away joke: I really do mean to use this blog to write about killer robots.
To that end, I want to talk about the fact that South Korea is leading the way in building killer robots. Specifically, it’s leading the way in building fully autonomous military robots. The phrase “fully autonomous” is key here. You hear a lot about the use of drone warfare by the US, but current US drone warfare keeps a human in the loop: a human needs to pull the trigger in order for the drone to kill someone. Recently, the Pentagon has tried to insist humans will always be “in the loop.”
The South Korean military, on the other hand, has already deployed sentry guns that can be set to identify and fire on targets fully autonomously (though allegedly this setting isn’t being used). Why is South Korea leading the way here? There’s a good reason for it, and it has nothing to do with stereotypes about Asian technophilia.
To understand what’s going on here, it’s worth mentioning a little fact I learned while in Cambodia: Cambodia has the highest number of amputees per capita in the world. That’s because during Cambodia’s civil war in the 1970s, lots of landmines got planted, and after the shooting stopped, the landmines didn’t magically disappear.
To give you another idea of how bad the problem is, if you visit Cambodia as a tourist, the guidebooks will tell you that once you get out of the city centers, you shouldn’t stray from well-trodden paths. That’s because there could be landmines, even in tourist-heavy areas. Cambodia is exhibit A for why there have been attempts to ban landmines internationally.
Except the United States is not totally on board with the idea of banning landmines. That’s because we’ve mined the fuck out of the Korean Demilitarized Zone (or DMZ for short), the narrow strip of land that divides North and South Korea. And arguably, this is morally justifiable, because everyone in Korea knows about the DMZ and knows that if you randomly wander into the DMZ, you not only risk stepping on a landmine but also risk getting shot. So the chances of some kid wandering into the DMZ and stepping on a mine are low.
(As an aside, the Korean DMZ is one of the most hilariously misnamed places on the planet. In spite of the name, both sides of the DMZ are actually heavily fortified.)
And that’s why South Korea can lead the way on killer robots. A really dumb killer robot (at the level we actually know how to build right now), one that just shot anything that moved, would be hugely problematic for the same reasons that landmines are hugely problematic. But maybe something like that could be okay for a place like the DMZ (not that the Korean robot is even quite that indiscriminate–supposedly it can tell when a soldier is surrendering, though I can’t confirm how well that function works).
Yet this development still seems worrisome. Grant for the sake of argument that it would be okay to deploy a not-very-discriminating robot to the DMZ. We should still be worried that the military might be tempted to try to extend that logic to cases where it really doesn’t apply. Like, I don’t know, we could establish a no-fly zone over Iran and have relatively dumb autonomous drones police it on the theory that anyone who violates the no-fly zone should have known better, but then we don’t publicize it well enough and oh crap, one of our drones just blew up a plane with a bunch of kids on it.
One of the things that convinces me that these worries need to be taken seriously is reading the recently released book Kill Anything That Moves: The Real American War in Vietnam (which I plan to devote a post to in the near future). Among the borderline-genocidal policies described in the book is the use of “free-fire zones”: areas where theoretically everyone knew you weren’t supposed to go, and therefore anyone there could be assumed to be VC and shot, but which an awful lot of civilians failed to stay out of for various reasons and got killed.
The lesson from free-fire zones in Vietnam is simple: a policy that looks good on paper (“we don’t have to worry about civilian casualties, because there will be no civilians in the free-fire zones”) can be murderous in practice. Something similar could happen with our killer robot policy. For that reason, I suspect the best thing to do may be to ban fully autonomous killer robots (“man in the loop” drones are another matter) entirely until we can build robots with much better systems for distinguishing legitimate and illegitimate targets.
Page image from Texts from Drone.