Skynet Goes Online in 3 … 2 …

There is absolutely no question about it: battlefield robots and drones save lives. That’s crucial.

At the same time, however, they can also take lives, and here we get into a morally muddy issue. Our government is just fine with drone assassinations of US citizens, without due process or even a presentation of the evidence. That's so far gone on every level (moral, legal, ethical) that we really shouldn't even be debating it. It's not a power anyone should have.

The problem with the rise in remote-controlled, robot-driven warfare is that it can make war too sanitized, too safe. As Robert E. Lee said, “It is well that war is so terrible — lest we should grow too fond of it.” If we can deal death and destruction without risk to ourselves and our soldiers, don’t we become ever-more-tempted to do so?

In a long and interesting story from the AP, we get a glimpse of the future of robot warfare, and it’s not just human-piloted drones, but autonomous robots and vehicles.

Ten years of war in Afghanistan and Iraq have put a spotlight on the growing use of unmanned systems in the skies over the battlefield, from the high-flying Global Hawk to the lethal Predator aircraft and the hand-launched Raven.

But on the ground, thousands of small, remotely operated robots also have proven their value in dealing with roadside bombs, a lethal threat to U.S. troops in both wars. Of more than 6,000 robots deployed, about 750 have been destroyed in action, saving at least that many human lives, the Pentagon’s Robotics Systems Joint Program Office estimates.

Only now is robotics research nearing the stage that the military may soon be able to deploy large ground vehicles capable of performing tasks on their own with little human involvement. The results, among other things, could be more saved lives, less wear and tear on the troops, and reduced fuel consumption.

Full autonomy, engineers say, is still years away.

“The ground domain is much, much tougher than the air domain because it’s so dynamic,” said Myron Mills, who has worked on both aerial and ground robotic systems and now manages an autonomous vehicle program for Maryland-headquartered Lockheed Martin Corp.

Mills said autonomous ground systems face a series of challenges, such as dust, fog, and debris, as well as the need to avoid civilians and troops. A path may be passable one moment and littered with obstacles a half hour later due to battle damage.

“It’s just a very, very tough and chaotic environment,” Mills said. “The hardest thing to deal with has been figuring out how to make the system usable for the soldiers and be able to cope with the chaotic environment.”

Enough progress has been made that Lockheed’s Squad Mission Support System, a 5,000-pound (2,268 kg) vehicle designed to carry backpacks and other gear for overloaded foot soldiers, is now being tested in Afghanistan.

Wisconsin-based Oshkosh’s unmanned vehicle system, which would allow one person to control several heavy cargo trucks, has been assessed by U.S. Marine Corps drivers in the United States and is in the final stages of concept development.

A four-legged walking robot designed to carry loads for combat foot patrols, the Legged Squad Support System, or LS3, is due to undergo testing and assessment with troops toward the end of the year, developers at Massachusetts-based Boston Dynamics said.

Read the whole thing. Right now, these advanced units are being designed for supply, but how long will it be before they’re designed for combat? The Star Trek episode “A Taste of Armageddon” imagined a society where wars were fought by simulation, with casualties calculated and people showing up to disintegration booths to die nice, clean deaths while preserving the infrastructure. They’d removed the ugliness and destruction from war, and thus became less inclined to do the hard work towards peace.

Please don’t misunderstand me: anything that helps protect the lives of soldiers is good. But we need to guard against the possibility of slipping into a cavalier attitude about warfare. We already appear to be there with the use of assassination drones. (And I really can’t even believe I’m typing those words. The United States of America has assassination drones! It boggles the mind.) We don’t do that. We don’t do it because if we’re not better than our enemies, our fight means nothing.

About Thomas L. McDonald

Thomas L. McDonald writes about technology, theology, history, games, and shiny things. Details of his rather uneventful life as a professional writer and magazine editor can be found in the About tab.

  • Noah D

    Taking the man out of the loop for shoot/no-shoot decisions isn’t going to happen – and if it does, it’s a very good warning sign that we’ve really lost our minds. Even systems like C-RAM (installation defense against rockets & mortars) and Aegis (shipborne tracking/fire control/weapon selection) *could* be fully autonomous, but they’re not. Having a human hand on the shoot/no-shoot button is paramount, as experience with autonomous systems has shown that Murphy loves them very much, and graces them with all kinds of glitches. See ‘Sgt. York DIVADS’ and ‘C-RAM sees helicopter rotor tips as incoming mortars’. Also, from what I’ve read, there’s a profound unease among the ranks, high and low, with letting a robot/program decide whether or not to kill a human being. Soldiers don’t tend to trust such things, and they get mysteriously broken.

    As for the use of Predator-types and other attack drones, they’re not ‘assassination drones’, any more than my scoped deer rifle is an ‘assassination rifle’. It’s a combat drone; it’s the controllers who use it for assassinations.

  • Noe

    I wonder what the psychological toll is on people who do remote slaying 9-5, then go home to suburbia, the wife and kids. Life and its taking, reduced to ‘video gaming’? Are you actually going to post on skynet in the future?

  • lethargic
  • Thomas L. McDonald

    I usually just use “skynet” as shorthand for referring to potentially frightening military applications of technology, particularly autonomous systems.

  • Thomas L. McDonald

    My concern is: how long will the human remain in the loop? We know where we are now, and where we may be in 10 or even 20 years. But further down the line, as drones and bots become more ubiquitous, is it so unthinkable that we would send out fully autonomous combat drones? Our moral sense is eroded day by day. I see things now that I never dreamed of seeing 10 or 20 years ago. I do worry that our use of technology in warfare will not always be guided by a strong moral sensibility.

    As for the use of the term “assassination drones,” I was trying to make a point about the way the line between what a thing is (a simple piece of amoral hardware) and what it does (an immoral act of extra-judicial assassination) becomes blurred.

  • victor

    Hello?! 750 out of 6,000 deployed Robots have been destroyed?! Talk about burying the lead! That’s greater than a 12% casualty rate. And that’s acceptable? How long before we start adopting this sort of reductive functional view towards our non-Robotic servicemen and women? And don’t tell me the Robots are fine with it, either. Methinks it’s time again for some Robot Love.

    If you need any help getting started, here’s an old film about some simple ways you can love your Robot. And remember: February 7th is Love Your Robot Day. Stop the madness!

  • Thomas L. McDonald

    Victor! I did not know you were a man of many talents. Very cool.

  • David Naas

    Regrettably, even human-controlled drones might be worse than Terminator T-1000 models. (“Skynet”!) Research going way back proves beyond any comfortable shadow of doubt that humans will torture their fellow humans to death, if ordered to do so, even if the person ordering has absolutely no formal authority.
    The Milgram Experiment (duplicated “successfully” many times since) showed as much.
    See why I am more afraid of humans than robots?

  • victor

    Thanks! :-) Mainly, I just want there to be some sort of electronic paper trail so that when Skynet DOES go active, I can show it to the Robots and say “Hey! I tried!”

  • Ron19

    How long will the human remain in the loop?
    Only until the button is pushed.
    Once launched, cruise missiles, for instance, can travel for several hours before detonation. They may have some kind of recall or command destruct mechanism nowadays, but I haven’t heard of it. They did not have that as an original design intent. Same for ICBMs.
    The big push to globally outlaw land mines stems from the fact that they can remain active for years, long after a conflict is resolved. And human suicide bombers are looking for maximum bang-for-the-buck, not for a surgical strike.