More on why you should be worried about autonomous killer robots

I just stumbled across a great Popular Science article titled “The Terminator Scenario,” which gives a good explanation of why we should be worried about the coming of autonomous military robots. Key quotes (emphasis mine):

I asked that question of Werner Dahm, the chief scientist of the Air Force and the lead author on “Technology Horizons.” He dismissed as fanciful the kind of Hollywood-bred fears that informed news stories about the Navy Fire Scout incident. “The biggest danger is not the Terminator scenario everyone imagines, the machines taking over—that’s not how things fail,” Dahm said. His real fear was that we would build powerful military systems that would “take over the large key functions that are done exclusively by humans” and then discover too late that the machines simply aren’t up to the task. “We blink,” he said, “and 10 years later we find out the technology wasn’t far enough along.”

Dahm’s vision, however, suggests another “Terminator scenario,” one more plausible and not without menace. Over the course of dozens of interviews with military officials, robot designers and technology ethicists, I came to understand that we are at work on not one but two major projects, the first to give machines ever greater intelligence and autonomy, and the second to maintain control of those machines. Dahm was worried about the success of the former, but we should be at least as concerned about the failure of the latter. If we make smart machines without equally smart control systems, we face a scenario in which some day, by way of a thousand well-intentioned decisions, each one seemingly sound, the machines do in fact take over all the “key functions” that once were our domain. Then “we blink” and find that the world is one we no longer are able to comprehend or control.

Placing such faith in our military machines may be tempting, but it is not always wise. In 1988 a Navy cruiser patrolling the Persian Gulf shot down an Iranian passenger plane, killing all 290 aboard, when its automated radar system mistook the aircraft for a much smaller fighter jet, and the ship’s crew trusted the computer more than other conflicting data. Several scientists at Wright-Patterson mentioned learning from such classic examples of over-reliance on machines, which even in civilian aviation has led to fatal accidents. As often as not, such military machine mishaps are the result of what a vice president at a robotics company described to P.W. Singer as “oops moments,” the kinds of not-uncommon mistakes that occur with the technology at such an early stage of development. In 2007, when the first batch of the armed tank-like robots called SWORDS were deployed to Iraq—and then quickly pulled from action—a story spread about one aiming its guns at friendly forces.

The robot’s manufacturer later confirmed that there were several malfunctions but insisted that no personnel were ever endangered. A C-RAM in Iraq did target a U.S. helicopter, identifying it incorrectly as incoming rocket fire; why it held its own fire remains unclear. A soldier back from duty in 2006 told Singer that a ground robot he operated in Iraq would sometimes “drive off the road, come back at you, spin around, stuff like that.” That same year, Singer says, a SWORDS inexplicably began whirling around during a demonstration for executives; a scene out of the movie Robocop was avoided because the robot’s machine gun wasn’t loaded. But at a crowded South African army training exercise in 2007, an automated anti-aircraft cannon seemed to jam and then began to swivel wildly, firing all of its 500 auto-loading rounds. Nine soldiers were killed and 14 seriously wounded.

It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force research lab, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”

The precursors to such systems—the remote-control machines that perform many operations the military calls too “dull, dirty, and dangerous” for humans—are in wide use, and not without consequence. CIA drone crews unable to adequately discriminate between combatants and noncombatants have so far killed as many as 1,000 Pakistani civilians. And truly autonomous systems are taking on increasingly sensitive tasks. “I don’t know that we can ignore the Terminator risk,” Lin says, and as an example suggests not some killer drone but rather the computers that even now control many of our business operations. Last spring, for instance, one trading firm’s “sell algorithm” managed to trigger a sudden 1,000-point “flash crash” in the stock market: “It’s not that big of a stretch to think that much of our lives, from business systems to military systems, are going to be run by computers that will process information faster than we can and are going to do things like crash the stock market or potentially launch wars. The Terminator scenario is not entirely ridiculous in the long term.”

One last bit: I was going to make a joke about how the reassuring takeaway is that the military is working on these issues, and the not-so-reassuring takeaway is also that the military is working on these issues. Then I got to the end of the article…

Wendell Wallach emphasizes the “tremendous confusion out there about how autonomous robots will get. We have everything from ‘We’re only two decades away from human-level AI’ to people who think we’re 100 years away or may never get there.” Even at the Air Force lab, where I was shown the cutting-edge work in the field, I left without a clear sense of what we might expect in the years ahead. Siva Banda, the head of control theory there, told me the Air Force understood well the standards and specs required to build a manned aircraft. “But our knowledge when it comes to UAVs—we’re like infants. We’re babies.” Indeed, it’s not clear that anyone is taking the lead on the matter of military machines. After P.W. Singer recently briefed Pentagon officials on his Wired for War, a senior defense-department strategy expert said he found the talk fascinating and had a question: “Who’s developing and wrestling with the strategy for all of this?” Singer, who has now given many such talks, explained to the official: “Everyone else thinks it’s you.”
