One Congressman’s Crusade to Save the World From Killer Robots


Robots could one day keep troops out of combat, but opponents say deploying them will have dangerous consequences.

If a robot soldier commits a war crime, who is held accountable?

You can't punish a collection of parts and coding algorithms. But can you blame a human commander who gave a legal order, only to see the robot carry it out incorrectly? And what about the defense manufacturers, which are often immune from the kinds of lawsuits that would plague civilian outfits if their products cost lives?

The culpability question is one of a host of thorny moral dilemmas presented by lethal robots. On the one hand, if effective, robot soldiers could replace ground troops and prevent thousands of American casualties. And robots aren't susceptible to many of the weaknesses that plague humans: exhaustion, sickness, infection, emotion, indecision.

But even if robot warriors can keep American lives out of danger, can they be trusted with the complicated combat decisions now left to human judgment?

Rep. Jim McGovern thinks not.

The Massachusetts Democrat is part of a crusade for an international ban on killer robots—machines that can decide without human input whom to target and when to use force.

The only way to stop killer robots, said McGovern and a series of panelists he assembled for a Capitol Hill briefing this week, is to ban them before they even exist. As with drones, once one country fields a killer robot, it's only a matter of time before everyone else is racing to catch up. And despite some countries' commitment to evaluating the technology responsibly, good intentions never won an arms race.

"The only thing harder than getting a ban in place is getting a ban in place after something is developed," McGovern said.

McGovern is racing technology, but he believes he has time: He estimates it will take another two to three decades before such machines become available.

McGovern's Tuesday panel is part of an ongoing effort by anti-robot activists to raise awareness about the issue. They hope lawmakers will share their concerns and join their push for a worldwide ban. "The U.S. should show leadership on this," said Human Rights Watch's Steve Goose. "If the U.S. were able to get out in front … it would lead the way for many other nations."

So why is it so important that robots never see the battlefield? For some of the panelists, the issue is a moral one. "Do we really want to establish a precedent where it's OK for machines to take the lives of human beings?" said Dr. Peter Asaro, a founder of the International Committee for Robot Arms Control.

For most, though, the chief worry is judgment, and humans' innate ability to read context. "Soldiers have to rely on intention or subtle clues," said Bonnie Docherty, an arms expert at Human Rights Watch and a lecturer at Harvard Law. "We have serious concerns that a fully autonomous weapon could ever reach that level."

Especially on battlefields where fighters aren't always wearing distinguishing uniforms, the ability to interpret other humans' actions becomes critical. Even if a robot can tell friend from foe, it might have trouble recognizing whether an enemy is surrendering or wounded.

Media depictions like the Terminator films have anthropomorphized warrior robots, which "implies a level of cognitive ability that these machines do not have," said Paul Scharre, who has worked on the Defense Department's autonomous-weapon policies. "Images from science fiction are not very accurate or very helpful."

Killer robots won't look like humans, and they probably won't act like them either. "What [robots] really lack is a meaningful understanding of context and situation," Asaro said. "It's hard to believe that a machine could be making those kinds of meaningful choices about life and death."

Other concerns include the possibility of malicious hackers taking over a robot army. And then there's the risk of a "flash war" starting over a mistake: If one robot malfunctions and fires, robots on the other side could return fire automatically, starting a conflict at the speed of circuitry before a human could intervene.

The arms-race worry is very real, Asaro said. Unlike nuclear weapons, which require extreme technical sophistication, killer robots won't be hard to replicate. "Once these technologies are in existence, they'll proliferate widely," he said. "There are even software systems that could be implemented through the Internet."

Despite all these concerns, robot advocates say the rush to ban the technology outright is ill-conceived. While preaching caution on development, they also say it's important to test the systems' limits before crafting policy.

They fear a ban premised on visions of an android toting a machine gun could interfere with lifesaving technologies like rapid-response air-defense missiles. And while context recognition remains a huge challenge, advocates say it's at least worth exploring whether robot warriors could actually reduce civilian casualties in some circumstances.

There's also the challenge of enforcement. Even if a ban were enacted, it would be hard to tell whether a drone fired a missile on its own, or whether some other weapons system was acting on the orders of a human or of an algorithm.

It's not that anyone has killer-robot plans just yet. In fact, the panelists agreed the U.S. has been thoughtful and responsible in approaching the issue. The Defense Department even issued a policy statement on the technology in late 2012 that established a five- to 10-year moratorium on developing killer robots.

But an American stand-alone policy might not be enough. According to Scharre, at least 23 countries have joined the race to build armed drones. It's not hard to imagine a similar push to build machines that could replace combat soldiers—with or without U.S. involvement.

Meanwhile, the issue will only get trickier. Militaries won't leap straight from the flesh-and-blood soldier to a T-1000; instead, combat systems could phase in more and more autonomy over time.

Some robots will be in-the-loop systems, with human operators monitoring their actions and able to override them at any point. The longer-term prospect is an out-of-the-loop robot, one that carries out missions with minimal supervision and no opportunity for human intervention.

Panelists agreed that the best chance for a ban will probably come wrapped in language other than "robot ban." They hope to persuade countries to agree to something in more positive language—that their autonomous weapons will have a human operator monitoring them and ready to take over at any time.

Regardless of just what is allowed, it's important that militaries know where to draw the line before they have the technology to build killer robots. A treaty "frees up weapons developers to know what they're allowed to do," Scharre said.

As robots get more complex—and better able to read and respond to human cues—it's likely some advocates will argue they deserve a more prominent place in combat. But for McGovern and his allies, such weapons would have to meet a challenge they now deem impossible: Can you build a robot not only with a brain but with a soul?

(Image via Ociacia/Shutterstock.com)