What Happens When a Car is Hackable? Science Agency Spends $1M to Find Out

A driving telematics simulator is demonstrated at the University of Michigan Transportation Research Institute in Ann Arbor, Mich. Tony Ding/AP

The many ways hackers could disrupt autonomous vehicles.

Imagine this future scenario: Self-driving cars form an orderly procession down a highway, traveling at precisely the right following distance and speed. All the on-board computers cooperate, and all the vehicles reach their destinations safely.

But what if one person jailbreaks her car and tells her AI driver to go just a little faster than the other cars? As the aggressive car moves up on the other vehicles, their safety mechanisms kick in and they change lanes to get out of the way. It might lower the overall efficiency of the traffic flow, but this one driver would get ahead.

This is but one of many scenarios that Ryan Gerdes of Utah State University is exploring with a $1.2 million grant from the National Science Foundation to study the security of an autonomous-vehicle future.

"The designers of these systems essentially believe that all of the nodes or vehicles in the system want to cooperate, that they have the same goals," Gerdes said. "What happens if you don't follow the rules? In the academic theory that’s built up to prove things about this system, this hasn’t been considered."

While Google is out to create a fully autonomous vehicle some years into the future, the major carmakers are taking more incremental steps toward autonomy. Nissan, Volkswagen, Daimler and others all have programs. Just this week, Cadillac announced that a 2017 model would include "Super Cruise," a feature allowing "hands-free" driving on highways.

The race to bring out self-driving technologies has drawn in regulators in several states, but it's hard to evaluate the claims of the carmakers or anyone else without independent analysis of the vehicles.

All the autonomous vehicle makers have downplayed security concerns. Chris Urmson, Google's self-driving car project lead, provided a reasonable but largely boilerplate answer to a security question at an event earlier this year. "There is no silver bullet for security and we're taking a multilayered approach," Urmson said. "Obviously there is encryption and very narrow interfaces or no interfaces at all. You do your best to make your outside layer secure and then make your inside layer more secure."

To translate: Urmson is saying that they don't want hackers to get into any of the car's systems (the outer layer), but they also don't assume that no one will ever get in. So access to the car's controls would be further quarantined from the other networked components an intruder might reach.
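For a concrete sense of what that quarantine might look like, here is a minimal sketch of an allow-list gateway sitting between an externally reachable network (radio, telematics) and the drive-control network. The message IDs and function names are hypothetical; Google has not published its actual architecture.

```python
# A minimal sketch of the "layered" idea Urmson describes: an explicit
# allow-list gateway between the externally reachable network (radio,
# telematics) and the drive-control network. All names and message IDs
# here are hypothetical; real architectures are not public.

# Messages the outer layer is ever allowed to send inward, e.g. read-only
# status requests. Anything else -- brake, throttle, steering commands --
# is dropped and logged.
ALLOWED_INBOUND_IDS = {0x7DF}  # hypothetical "diagnostic status request" ID

def gateway_filter(message_id: int, payload: bytes) -> bool:
    """Return True if the message may cross from the outer to the inner network."""
    if message_id not in ALLOWED_INBOUND_IDS:
        print(f"blocked inbound message 0x{message_id:X}: {payload.hex()}")
        return False
    return True

if __name__ == "__main__":
    # A status request passes; a forged command on another ID does not.
    print(gateway_filter(0x7DF, b"\x02\x01\x0d"))  # True
    print(gateway_filter(0x220, b"\xff\x00"))      # False (hypothetical brake ID)
```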

But a straight-up hack is not the only kind of threat that Gerdes is studying with his NSF grant money. "If you just look at traditional threats to a computer, you’re going to miss out on a lot bigger threats," he said.

What he's fascinated by is the way that bad actors could use the self-driving cars' algorithms against the cars themselves. The algorithms that guide these cars are, at least for now, fairly "deterministic," as he put it: a given set of inputs will yield the same outputs over and over. That makes them prone to manipulation by someone who knows how they work (a toy illustration of this point follows the list below). He can spin out scenario after scenario:

  • "What happens when you have two advanced cruise control vehicles and the one in front starts accelerating and breaking such that the one behind it starts doing the same thing in a more amplified fashion?"
  • "We’re looking at the collision avoidance systems. They rely on radar. We think we can manipulate radar sensors to some extent. Is it simple for an attacker to create an obstacle out of thin air?"
  • "Auto manufacturers always maintain the proper spacing in adaptive cruise control. You might get interesting effects if [someone] crafted certain inputs or misbehaved in a certain way so they create a very large traffic jam."
  • "If I’m a shipping company and I want to slow down the competition... I can take advantage of their sensors and keep making their cars brake and accelerate. We’ve already demonstrated in theory that it’s possible."

In all of these circumstances, the researchers are trying to understand how the algorithms that guide autonomous vehicles could be exploited by hackers or other bad actors. They don't have access to the self-driving cars that carmakers are working on, so to test their ideas in the field, they're using BattleBots to stand in for full-size cars and trucks. They program the BattleBots with the control logic they imagine the car companies are using, based on published academic literature.

Because carmakers build their specialized systems from components made by large suppliers like Bosch, Gerdes' team can often get the same core parts that go into self-driving systems.

"Experiments are really hard in this realm, but we think we have a decent analog," Gerdes told me. "We can accelerate a lot faster than most cars and they are also made for battle, so we can crash them together."

Obviously, everyone building autonomous vehicles has a major incentive to get the security issues right. But so do credit card companies and Target and Apple—and they have all experienced major problems with security over the last few years. And, Gerdes said, the traditional car companies have not inspired confidence in the security research community with some of their designs.

A 2010 paper found all kinds of security flaws in a modern automobile, including head-slappingly simple stuff like allowing the car's control system to be accessed through the radio controller. Install a hackable aftermarket radio, and a malicious party could take control of the brakes.

"Why would you design a car to work like that?" Gerdes asked. "And these are the same people who are going to be making our automated vehicles?"