More on why you should be worried about autonomous killer robots

I just stumbled across a great Popular Science article titled “The Terminator Scenario,” which gives a good explanation of why we should be worried about the coming of autonomous military robots. Key quotes (emphasis mine):

I asked that question of Werner Dahm, the chief scientist of the Air Force and the lead author on “Technology Horizons.” He dismissed as fanciful the kind of Hollywood-bred fears that informed news stories about the Navy Fire Scout incident. “The biggest danger is not the Terminator scenario everyone imagines, the machines taking over—that’s not how things fail,” Dahm said. His real fear was that we would build powerful military systems that would “take over the large key functions that are done exclusively by humans” and then discover too late that the machines simply aren’t up to the task. “We blink,” he said, “and 10 years later we find out the technology wasn’t far enough along.”

Dahm’s vision, however, suggests another “Terminator scenario,” one more plausible and not without menace. Over the course of dozens of interviews with military officials, robot designers and technology ethicists, I came to understand that we are at work on not one but two major projects, the first to give machines ever greater intelligence and autonomy, and the second to maintain control of those machines. Dahm was worried about the success of the former, but we should be at least as concerned about the failure of the latter. If we make smart machines without equally smart control systems, we face a scenario in which some day, by way of a thousand well-intentioned decisions, each one seemingly sound, the machines do in fact take over all the “key functions” that once were our domain. Then “we blink” and find that the world is one we no longer are able to comprehend or control.

Placing such faith in our military machines may be tempting, but it is not always wise. In 1988 a Navy cruiser patrolling the Persian Gulf shot down an Iranian passenger plane, killing all 290 aboard, when its automated radar system mistook the aircraft for a much smaller fighter jet, and the ship’s crew trusted the computer more than other conflicting data. Several scientists at Wright-Patterson mentioned learning from such classic examples of over-reliance on machines, which even in civilian aviation has led to fatal accidents. As often as not, such military machine mishaps are the result of what a vice president at a robotics company described to P.W. Singer as “oops moments,” the kinds of not-uncommon mistakes that occur with the technology at such an early stage of development. In 2007, when the first batch of the armed tank-like robots called SWORDS were deployed to Iraq—and then quickly pulled from action—a story spread about one aiming its guns on friendly forces.

The robot’s manufacturer later confirmed that there were several malfunctions but insisted that no personnel were ever endangered. A C-RAM in Iraq did target a U.S. helicopter, identifying it incorrectly as incoming rocket fire; why it held its own fire remains unclear. A soldier back from duty in 2006 told Singer that a ground robot he operated in Iraq would sometimes “drive off the road, come back at you, spin around, stuff like that.” That same year, Singer says, a SWORDS inexplicably began whirling around during a demonstration for executives; a scene out of the movie Robocop was avoided because the robot’s machine gun wasn’t loaded. But at a crowded South African army training exercise in 2007, an automated anti-aircraft cannon seemed to jam and then began to swivel wildly, firing all of its 500 auto-loading rounds. Nine soldiers were killed and 14 seriously wounded.

It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force research lab, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”

The precursors to such systems—the remote-control machines that perform many operations the military calls too “dull, dirty, and dangerous” for humans—are in wide use, and not without consequence. CIA drone crews unable to adequately discriminate between combatants and noncombatants have so far killed as many as 1,000 Pakistani civilians. And truly autonomous systems are taking on increasingly sensitive tasks. “I don’t know that we can ignore the Terminator risk,” Lin says, and as an example suggests not some killer drone but rather the computers that even now control many of our business operations. Last spring, for instance, one trading firm’s “sell algorithm” managed to trigger a sudden 1,000-point “flash crash” in the stock market: “It’s not that big of a stretch to think that much of our lives, from business systems to military systems, are going to be run by computers that will process information faster than we can and are going to do things like crash the stock market or potentially launch wars. The Terminator scenario is not entirely ridiculous in the long term.”

One last bit: I was going to make a joke about how the reassuring takeaway is that the military is working on these issues, and the not-so-reassuring takeaway is also that the military is working on these issues. Then I got to the end of the article…

Wendell Wallach emphasizes the “tremendous confusion out there about how autonomous robots will get. We have everything from ‘We’re only two decades away from human-level AI’ to people who think we’re 100 years away or may never get there.” Even at the Air Force lab, where I was shown the cutting-edge work in the field, I left without a clear sense of what we might expect in the years ahead. Siva Banda, the head of control theory there, told me the Air Force understood well the standards and specs required to build a manned aircraft. “But our knowledge when it comes to UAVs—we’re like infants. We’re babies.” Indeed, it’s not clear that anyone is taking the lead on the matter of military machines. After P.W. Singer recently briefed Pentagon officials on his Wired for War, a senior defense-department strategy expert said he found the talk fascinating and had a question: “Who’s developing and wrestling with the strategy for all of this?” Singer, who has now given many such talks, explained to the official: “Everyone else thinks it’s you.”

  • http://johnivorjones.blogspot.co.uk/ John Jones

    My charge of animism against your position, which you ignored, seems correct. The idea that we can make a man-doll out of cloth, rubber, or metal, make it move in ways that lead us to think it is alive (independent, intelligent, as you say), and that by so thinking we may baptise the man-doll with a life (for it is a baptism) is, even apart from your belief in the magic of baptism, an old religious belief, an idiotised animism.

    It is curious that some of the most religious people on the planet today are atheists, materialists and scientific skeptics. Even Richard Dawkins’ books are crammed full of religious doctrine, often blatantly modelled on biblical examples. But rest easy, the news hasn’t got out yet.

    • Mark

      What a silly comment.

    • Randy

      John Jones failed to address the article, which is about poor design and the failure to adequately test products before they are released. While tests should always be designed first, that obviously is not how most software gets created, hence the sheer volume of bugs (and the security holes that viruses exploit). We’re creating software today that is effectively untestable, because we’re unwilling to devote the resources to actually testing it.

      There is no part of this that has anything to do with religion, except insofar as religion says it does, and in that case religion would be wrong.

      • Signe

        The problem isn’t that they’re not tested, it’s that we don’t know how to design the tests. We don’t know what “testing properly” would even look like. The community is trying to define what things we need to test, but that doesn’t mean we have a handle on how to test a fully autonomous system that can adapt to its environment and on how to figure out whether its adaptations are going to ensure that it continues to operate properly instead of learning the wrong thing.

        But you are right that this post isn’t about religion.

      • Compuholic

        Like Signe already said: In many cases we don’t know what “testing properly” would look like.

        Even simple computer programs nowadays have thousands of variables, and every one of those can take millions of different values. It is basically impossible to design tests that check for correct behavior under all circumstances. When unit testing, programmers usually limit themselves to a few critical values, like checking whether values at the upper or lower end of the specified range are handled correctly.
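
        To make that concrete, here is a minimal sketch of what boundary-value unit tests look like, assuming a made-up helper called clamp_throttle (the function, its name, and its limits are purely illustrative, not from any real system):

```python
# Toy boundary-value tests for a hypothetical throttle limiter.
# Only a handful of "interesting" values get checked; the millions of
# values in between are simply assumed to behave the same way.
import unittest


def clamp_throttle(value, low=0.0, high=100.0):
    """Clamp a commanded throttle percentage into its legal range."""
    return max(low, min(high, value))


class ClampThrottleTests(unittest.TestCase):
    def test_lower_bound(self):
        self.assertEqual(clamp_throttle(0.0), 0.0)

    def test_upper_bound(self):
        self.assertEqual(clamp_throttle(100.0), 100.0)

    def test_below_range_is_clamped_up(self):
        self.assertEqual(clamp_throttle(-5.0), 0.0)

    def test_above_range_is_clamped_down(self):
        self.assertEqual(clamp_throttle(250.0), 100.0)


if __name__ == "__main__":
    unittest.main()
```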

        But even if that works out well, it says nothing about errors that might arise when the different components interact with each other, which is even harder to test reliably. Finally, most programs nowadays make use of multithreading. Multithreading errors usually cannot be reproduced, because you need to be lucky (or unlucky) enough that the threads execute the critical code sections at precisely the same time. So even if you run the same test 1,000 times, you are very unlikely to detect multithreading problems.
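
        As an illustration of why such bugs slip through, here is a deliberately broken toy (the counts are arbitrary): two threads update shared state without a lock, and whether any given run actually exposes the lost updates depends on thread scheduling the test cannot control.

```python
# Toy race condition: two threads increment a shared counter without a lock.
# Many runs happen to produce the correct total, so a test can pass
# repeatedly before an unlucky interleaving finally loses updates.
import threading

counter = 0


def worker(iterations=100_000):
    global counter
    for _ in range(iterations):
        counter += 1  # read-modify-write on shared state: not atomic


threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; lost updates can make it smaller, but only when the
# scheduler happens to interleave the increments just wrong.
print(counter)
```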

        There are techniques that at least allow you to find design faults: model checkers are one such tool. The problem is that you need an abstract model of the system and the full specification in the form of logical formulas. As you can imagine, this is extremely time-consuming and costly, so those methods are usually reserved for safety-critical systems. And you can only check the system design; there is no guarantee that the programmers will implement that design correctly.
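
        As a sketch of the underlying idea (explicit-state exploration, not any particular tool), here is a toy “model checker” for a deliberately tiny abstract model: a two-process, turn-based mutex, with the safety property that both processes are never in the critical section at once. The model and property are invented for illustration; real specifications are vastly larger.

```python
# Toy explicit-state model checking: enumerate every reachable state of a
# small abstract model and assert a safety property in each one.
# The model is an invented two-process, turn-based mutual-exclusion protocol.
from collections import deque


# Abstract state: (pc0, pc1, turn), where each pc is "idle", "waiting",
# or "critical", and turn says which process may enter the critical section.
def successors(state):
    pc0, pc1, turn = state
    next_states = []
    for proc in (0, 1):
        pcs = [pc0, pc1]
        if pcs[proc] == "idle":
            pcs[proc] = "waiting"
            next_states.append((pcs[0], pcs[1], turn))
        elif pcs[proc] == "waiting" and turn == proc:
            pcs[proc] = "critical"
            next_states.append((pcs[0], pcs[1], turn))
        elif pcs[proc] == "critical":
            pcs[proc] = "idle"
            next_states.append((pcs[0], pcs[1], 1 - proc))
    return next_states


def safe(state):
    # Safety property: the two processes are never both in the critical section.
    return not (state[0] == "critical" and state[1] == "critical")


# Breadth-first search over the whole reachable state space.
initial = ("idle", "idle", 0)
seen, queue = {initial}, deque([initial])
while queue:
    state = queue.popleft()
    assert safe(state), f"safety violated in state {state}"
    for nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

print(f"checked {len(seen)} reachable states; the property holds in all of them")
```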

        And for robots it is even more complicated than that, because the actions of the robot usually depend on what the sensors are measuring. So you would need to test not only everything I mentioned above but also every possible sensor input, which is basically impossible to do.
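
        Just to put a number on “every possible sensor input” (the camera size and bit depth below are arbitrary assumptions, chosen small on purpose):

```python
# Back-of-the-envelope count of distinct inputs for a single, deliberately
# tiny sensor: one 64x64 frame from an 8-bit grayscale camera. A real robot
# sees a stream of much larger frames plus radar, GPS, inertial data, etc.
pixels = 64 * 64          # 4,096 pixels per frame
levels = 256              # 8-bit grayscale values per pixel
distinct_frames = levels ** pixels

# distinct_frames has thousands of digits; print only its order of magnitude.
print(f"distinct 64x64 8-bit frames: roughly 10^{len(str(distinct_frames)) - 1}")
```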

  • jay

    We’re already seeing those issues with increasingly ‘intelligent’ automobile control systems, and Google-style self-driving cars are road-legal in a number of states. Some argue that within a decade they will become the norm (one futurist said his son will probably never drive a car).

    The questions are similar, but the problem I see is that we ‘don’t know what we don’t know’. I’m not sure how you’d ever validate such a thing.

  • eric

    CIA drone crews unable to adequately discriminate between combatants and noncombatants have so far killed as many as 1,000 Pakistani civilians.

    One social problem I didn’t see mentioned in this or your earlier article was our tendency to change our doctrines, standards, and policies when a system we want to use isn’t up to the job of meeting the standards we have. The adoption of drones was quickly followed by the US altering its definition of combatant to, essentially, any male of military service age in the area. When troops go door-to-door, that certainly isn’t the standard – they don’t walk into a house and kill all the men – but it is for drones, because that’s the technological limit of what they can detect.

    I worry about a similar social problem with advanced machine-controlled systems. If, say, some automated defense system is faster and cheaper, and makes fewer type II errors (misses fewer bad guys) but more type I errors (kills more innocents), we might just accept the innocent deaths as the price of using it.

    This is not a machine problem; it’s a social problem of weighing (other countries’) human lives against other things like cost. But the temptation to weigh life lower is certainly exacerbated by having machines that do lots of things wonderfully and cheaply yet result in some number of additional (foreigners’) deaths. It is harder to stand on principle when you’ve got an almost ideal machine for the job.
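
    To make that trade-off concrete with numbers I am inventing on the spot (neither error rate comes from any real system), compare two hypothetical setups over the same set of engagements:

```python
# Hypothetical illustration of the type I / type II trade-off described above.
# All rates and counts are invented; the point is only that a system can look
# better on missed threats while being worse on misidentified innocents.
engagements = 1000  # hypothetical number of engagements

systems = {
    "human-controlled": {"missed_threat_rate": 0.10, "misidentification_rate": 0.02},
    "automated":        {"missed_threat_rate": 0.03, "misidentification_rate": 0.05},
}

for name, rates in systems.items():
    missed = rates["missed_threat_rate"] * engagements
    misidentified = rates["misidentification_rate"] * engagements
    print(f"{name}: ~{missed:.0f} threats missed (type II), "
          f"~{misidentified:.0f} innocents misidentified (type I) "
          f"per {engagements} engagements")
```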

    • Compuholic

      While I agree that we often alter our specifications to match what can technologically be done, I am not sure that it applies to drones.

      In fact, I think they can even help minimize collateral damage. Drones are usually used for surveilling areas (e.g., monitoring roads for people planting IEDs, or watching training camps). I agree that the resolution of the sensors is usually very limited and that you are unable to clearly recognize people or weapons. But you also gain information that you would normally be unable to get. Since drones move fairly slowly and don’t burn a lot of fuel, you can monitor a person for hours. In many cases you don’t need to recognize people or weapons clearly; you only need to be reasonably sure what they are doing. And if a person is kneeling on the road in the middle of the night, digging a hole, you can be reasonably sure that they are planting an IED. And you can be sure that in the future better IR imaging systems will hit the market, so you will be able to see more detail.

      And since drones can stay up for a long time, you can track the person for as long as necessary and choose your time of attack for when nobody else is in the way.

      I’m not against drones in principle; I have my problems with the operators. Some of the videos I have seen disgust me, especially the unprofessional language of the drone pilots. When I hear orders like “smoke him”, it suggests to me that they are not judging the situation objectively. It sounds more like they are actively out to kill somebody. I think operators who are unable to separate their emotions from the situation need to be removed.
