What Keeps Pentagon’s Top Robotics Expert Up at Night?

Robot Running Man from the Florida Institute for Human and Machine Cognition proceeds to turn a valve in the U.S. Defense Advanced Research Projects Agency Robotics Challenge in Pomona, Calif. Alex Gallardo/AP

Hint: It's not killer robots.

Gill Pratt might be called the military’s top mind in robotics. As the program manager for the DARPA Robotics Challenge, or DRC, Pratt managed an international competition that brought 24 teams to a simulated disaster zone in California, complete with rubble and disrupted communications, to explore how humans and robots could work together in emergencies like the Fukushima Daiichi nuclear meltdown.

Pratt, who is stepping down at the end of August, sat down with Defense One to discuss the Robotics Challenge, the future of human-robot interaction, and how to stop a robot uprising (he’s not losing sleep over it). Below is that exchange, edited for clarity.

Defense One: Leading a competition to build emergency robots has surely taught you a lot about how these machines work and what happens when they don’t. What did you learn?

Gill Pratt: The No. 1 issue in emergencies is communication and coordination. Radio communication in a typical disaster is just awful. First, the cell network is overloaded because everyone is trying to talk and get in touch with each other. Second, the infrastructure has been degraded. Third, sometimes the physical nature of the place where the disaster occurred can be very bad for wireless comms. In Fukushima, there was shielding inside the reactor buildings, zinc plates and rebar, to keep radiation from getting out. The shielding prevented wireless signals from getting out, too.

Communications, Command, and Control is the hard part in an emergency where you’ve got hundreds of people trying to help. In the future, if the emergency is very bad and the environment is dangerous for human beings, some of the assets we bring to bear won’t be flesh and blood, but machines. How do we integrate them into the general response even when communications are degraded? That’s the question we were trying to get at in the DRC: it’s great to have robots that can do the work of humans in a dangerous place, but if the comms are bad, how do you get them to work as a team?

Defense One: Was there a particular moment of satisfaction or victory?

Gill Pratt: It was when the Carnegie Mellon machine and the KAIST machine performed at reasonable speeds right after they entered the building. KAIST put out a sped-up clip showing their whole run. What you see is that they don’t slow down. You can’t even tell that comms are out most of the time. [KAIST] goes and drills a perfect circle in the wall. That’s all being done locally by the machine itself.

How did it do that? First, there’s autonomy in the robot that can do actions at the task level. If you give it the task — “cut the hole in the wall” — it can do that without talking back and forth to you. Second, there’s simulation software on the operator control station, which shows the humans in the safe zone what’s going on on the other side. Because I know the robot’s software is running and I know where things were the last time we got to talk, I can predict where the robot is now. From the human point of view, it’s as if the comm link is good even when it isn’t. It showed that you can give a human being in a safe zone the illusion that they are actually watching what’s going on on the robot side, so when the comms come back the human can make the decision about what to do next in a seamless way.
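To make the predictive-display idea concrete, here is a minimal sketch of what Pratt describes: the operator station keeps propagating the robot’s last-known state forward with a simple motion model while the comm link is down, then reconciles with reality when telemetry returns. This is not DRC software; the class names, the constant-velocity model, and the numbers are hypothetical illustrations.

```python
# A sketch (not DRC code) of a predictive operator display during a comm blackout.
from dataclasses import dataclass

@dataclass
class RobotState:
    x: float         # position along the task path (meters)
    velocity: float  # commanded speed (meters/second)
    t: float         # timestamp of this state (seconds)

class PredictiveDisplay:
    def __init__(self, initial: RobotState):
        self.last_confirmed = initial  # last state actually received over comms
        self.predicted = initial       # what the operator sees right now

    def tick(self, now: float) -> RobotState:
        """Advance the prediction with a simple constant-velocity model."""
        dt = now - self.predicted.t
        self.predicted = RobotState(
            x=self.predicted.x + self.predicted.velocity * dt,
            velocity=self.predicted.velocity,
            t=now,
        )
        return self.predicted

    def on_telemetry(self, actual: RobotState) -> float:
        """Comms came back: snap to reality and report how far off we were."""
        error = abs(actual.x - self.predicted.x)
        self.last_confirmed = actual
        self.predicted = actual
        return error

# Usage: simulate a 5-second blackout while the robot moves at 0.2 m/s.
display = PredictiveDisplay(RobotState(x=0.0, velocity=0.2, t=0.0))
for t in range(1, 6):
    est = display.tick(float(t))
    print(f"t={t}s predicted x={est.x:.2f} m (no telemetry)")
drift = display.on_telemetry(RobotState(x=0.95, velocity=0.2, t=5.0))
print(f"prediction error after blackout: {drift:.2f} m")
```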

Defense One: It seems that emergency robots will work best in a controlled environment, one that has been mapped for them beforehand.

Gill Pratt: There’s a slope. If you are in a totally unstructured environment, figuring things out is very hard. In a structured environment like a lab, you know exactly where things are, so it’s very easy. Right now, we’re in a stage where robotics is good at things between those two extremes. In cases where we’re trying to mitigate the extent of the disaster, the typical setting is a structure that humans have built. There are blueprints, or humans have done a fly-through with a UAV. There’s a bit of a map. Is it a perfect map? No. The robot must register its own coordinates with the map and with what it expects to be there. It must adapt to imperfections. And the disaster itself may have had an effect on the building, but it’s not a totally unstructured environment.

In Fukushima, the stairs had been worn down by people walking up and down over many years. The blueprints and the maps helped to some extent, but you need to bridge the rest of the way with feedback from the robot itself to adapt to imperfections. We’ve tried to do the same thing: give the teams an idea, roughly, of where things would be.
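Here is a minimal sketch of the registration step Pratt describes: align what the robot actually senses to a prior map from blueprints, then flag where the real building deviates from the plan. It is an illustration only; the landmark names, the known correspondences, and the simple translation-only fit are assumptions, not how any DRC team’s software actually worked.

```python
# A sketch of registering sensed landmarks against a prior (blueprint) map.
import numpy as np

# Prior map: where the blueprint says landmarks are (meters, 2D for simplicity).
blueprint = {
    "door":  np.array([2.0, 0.0]),
    "valve": np.array([4.0, 1.5]),
    "stair": np.array([6.0, 3.0]),
}

# What the robot's sensors actually measured, in its own frame.
observed = {
    "door":  np.array([2.4, 0.6]),
    "valve": np.array([4.5, 2.0]),
    "stair": np.array([6.3, 3.9]),  # the stairs are not quite where expected
}

# Least-squares translation that best maps blueprint points onto observations.
names = list(blueprint)
diffs = np.array([observed[n] - blueprint[n] for n in names])
offset = diffs.mean(axis=0)  # best single shift of the whole map

# Residuals after applying that shift: large ones flag real-world deviations
# (worn stairs, shifted equipment) the robot must adapt to locally.
for n in names:
    residual = observed[n] - (blueprint[n] + offset)
    print(f"{n}: residual {np.linalg.norm(residual):.2f} m")
```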

Defense One: Before the teams brought their robots to the DRC finals, DARPA held a virtual event — a simulation to evaluate the teams’ software. Were you afraid someone would cheat by introducing malware into the cloud environment?

Gill Pratt: We had no reason to believe the teams were cheating, but we took steps to prevent it anyway. The tool we came up with, used in the virtual robotics challenge and developed by the Open Source Robotics Foundation, was a simulation engine run on the cloud. That means it can be run by anyone in the world. That was very, very new. Every simulation that we had was in a closed world. We built these walls around the teams so we never had one team and a second team on the same virtual stage at the same time. If a team were able to introduce any code, good or bad, only their score would be affected, so they had no reason to put anything bad in.

What that meant is that they couldn’t interfere with each other. In a real emergency, you wouldn’t want that. The idea is to have lots of people working cooperatively. You would have to trust a group of machines and humans to work together.

The question is: what if one of the people in the group is bad? I don’t think it’s a technological issue so much as a human security issue. I’m not an expert in the psychology of disasters but I know that they can attract people whose “help” you don’t want to have. You need to filter out who is official and who is not. It’s much more difficult to defend against an insider threat.

Defense One: Do you worry that future consumer robotics products won’t have the sort of software security that’s typical of machines for the military? And if future robotic intelligence and processing is cloud-based, then could malware uploaded to one robot affect an entire product line of commercial robots, causing breakdowns or worse?

Gill Pratt: I worry about it for every system out there, whether it’s civilian or defense. We have a yet-unsolved problem with security. The great advantage of having robots that can communicate with each other, and having the intelligence that they use be on the cloud, is that they can share things. When one machine figures out how to grab something, suddenly all of them know how to do it, too. That’s the upside.

The downside is that when one machine is infected by malware, they all are. That can be awful. I know that DARPA has looked at how to fix this, how to break the effects of software monoculture so that all of the systems aren’t the same. I worry about it a great deal, but the robots that we’ve been developing in the DRC can’t be made dangerous because of malware. We’re talking about lab prototypes that we’re using to understand this comms issue; their physical power is actually very low.

In terms of when you would deploy this thing, it’s still many, many years off. First the cost has to come down, the effectiveness has to come up, and the reliability has to come up. Along the way, you’re absolutely right, the security of the network has to be figured out. At the DRC, we were just trying to see how effectively man and machine can talk to each other even when the communication line keeps cutting out.

Defense One: If you aren’t worried about robots being hijacked to become dangerous, what’s your biggest security concern when it comes to future robots?

Gill Pratt: How do we protect the information that the robot picks up, not even during a disaster, but in other parts of life? Robots get to see. I would love to have a machine help me when I grow old — but I don’t want all the information, all that the robot is watching, to be made public. How do we protect against that? I don’t know. On the military side of things, if the machines are helping our forces do what they’re doing, how do we make sure that no one is watching? These are serious questions, but they aren’t specific to the robotics field. They’re specific to IT.

This is an issue that keeps coming up, and you don’t need a robot for it to become an extremely difficult issue. People have this notion that robots are dangerous because they have legs, so perhaps they can come get us. The danger is not in the legs. It’s in the camera and the microphone. We’re the robot. We carry cell phones around all over the place to the worst places they can be and we trust whoever it is to audit the software to make sure it has no malware in it. There’s a lot of trust there. I’m very worried about those systems. I don’t worry about the robot on the loose doing physical damage. The valuable stuff is the data. That issue is huge and transcends whether it’s a robot, a cellphone, or a laptop. If you solve it for the laptop and phone, you’ll solve it for the robot as well.

Defense One: You’ve emphasized that neither you nor the Defense Department is trying to build armed ground robots through this competition. But other nations don’t share those reservations. Russia has deployed armed robots that can fire autonomously to guard missile sites. If the Russian government, hypothetically, were to approach you for advice on how to make that autonomous armed ground robot system work, what would you tell them?

Gill Pratt: For a number of years to come, for situations on human time scales, the decision of whether to fire or not is one that a human will be best at. The chance of making a military situation much worse by having a machine autonomously make a mistake outweighs the benefit.

There are some cases where you have no choice. For instance, the Phalanx [Close-In Weapon System] gun on ships that shoots at incoming missiles has been around for decades, and it’s an armed, robotic system. It has to work beyond human speeds. The 2012 DOD directive on autonomy talks about that. The key is quality control, making sure that that machine can’t make an error. The Phalanx looks at the size and speed of the object coming toward it, and if it determines that the object is so fast and so small that it couldn’t possibly have a person in it, it shoots.

In those systems where we do have to arm an autonomous system — we have them now and I’m sure we will have them in the future — we must ensure good quality control over the target-detection software to make sure the system never makes an error. The U.S. will likely do that best. I worry more about the nation that doesn’t care as much as the U.S. does if errors are made.

We should also keep in mind that human beings make a lot of mistakes in war when they fear for their lives. There’s reason to think these systems can make things better. They can make things better by deciding when NOT to shoot.

Defense One: Earlier in the summer, several noted technologists, including Elon Musk and Stephen Hawking, signed a letter urging governments to ban the development of autonomous weapons. Defense One ran an essay by one of the letter’s co-authors as well as one by a critic of the proposed ban. What’s your position on the letter?

Gill Pratt: I believe that now is the wrong time to be making decisions like this. Having the discussion is fine. But saying, “No, we’re not going to work on this” is wrong. First, we need to understand what’s possible. We can make a choice not to use what we develop; we have made choices like that with bio-weapons, for example. We made a choice to ban them. In the case of lethal autonomy, we need to learn a whole lot more, and there’s a whole lot of good that these systems can do, too, in stopping lethal errors from happening. I would like to see where we can get to with that. There are also a whole lot of reasons why a ban is impractical right now. Calling for one now, based on an emotional fear of something in the far future, is doing it at the wrong time.

Defense One: You, clearly, are not worried about a robot uprising.

Gill Pratt: I’m extremely optimistic about the ability of robots to help us. I would like a machine to help me drive my car; I would like a machine to help me carry my backpack on a hike. There are lots of ways for these machines to make life better. My outlook is positive. I recently wrote an article titled “Is a Cambrian Explosion Coming for Robotics?” in The Journal of Economic Perspectives. It represents my views and not those of DARPA or the Department of Defense. It talks about some of these issues.

The Cambrian explosion was a period 540 million years ago when life became very diverse on Earth. One of the leading theories as to why is that the Cambrian is when eyes first developed. Our post-Cambrian ancestors could see mates and threats at a distance. Life just took off. I think we’re reaching the same point in robotics because, for the first time, these machines can see and understand their environment. At the DRC you began to see a little of that in machines that could do some of the work on their own using vision. We are at a tipping point. But it’s very, very hard to figure out exactly when we will actually tip. I think that when it occurs it will be fast because it will build on itself.

We thought that people’s responses to the robots competing in the DRC would be fearful: “Oh, it looks like a Terminator, I should be scared of it.” We discovered this other effect. The effect was extreme. When the robot fell down, people went, “Oh my god!” At one point, the MIT bot fell down and a woman at the press briefing asked, almost with tears in her eyes, “Do you know if the MIT robot is OK?” I don’t quite understand it, but I suspect that the bond between people and robots will be very strong.