A Tesla Fatality and the Future of Self-Driving Cars

Tesla employees work on Model S cars in the Tesla factory in Fremont, Calif. // Jeff Chiu/AP

Federal officials are investigating a crash that killed the driver of a Model S, a Tesla vehicle with a partially autonomous driving system, in a move that has major implications for the future of driverless vehicles.

“This is the first known fatality in just over 130 million miles where Autopilot was activated …” Tesla wrote in a statement Thursday. “It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations.”

The investigation may be standard procedure, but it’s also certain to influence the ongoing conversation about the safety of self-driving vehicles.

The Model S isn’t technically a driverless car, but Tesla has been a vocal player in the race to bring truly driverless cars to market. The company’s Autopilot feature is an assistive technology, meaning that drivers are instructed to keep their hands on the wheel while using it—even though it is sophisticated enough to complete tasks like merging onto the highway. It wasn’t clear from Tesla’s statement how engaged the driver was at the time of the crash.

“What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S,” Tesla said. “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”

Autopilot is still in beta mode, so drivers who use it have agreed to test the technology for Tesla and transmit data about its use back to the company. Tesla has repeatedly emphasized that Autopilot requires drivers to stay as focused as they would if they were driving as usual. But that hasn’t stopped people from viewing Autopilot as a stepping stone to a self-driving near-future—or more.

When the feature was first introduced last fall, it didn’t take long before people began uploading YouTube videos of themselves pushing the Model S beyond its intended level of autonomy. Some drivers sat with their hands far from the wheel. In one video, a man held up a newspaper between himself and the windshield as the car essentially drove itself.

“They’re not all being insanely stupid,” John Leonard, an engineering professor at the Massachusetts Institute of Technology, told me at the time. “But some of these people are totally reckless.”

Plenty of roboticists chalk up these stunts to human nature, but they pose a real quandary for engineers who are building driverless systems to be safer than existing human-driven vehicles.

“I think people just exhibit unsafe behaviors period, right?” said Missy Cummings, the head of Duke’s Robotics Lab, when I met with her at the university last month. “We have seen—and Google has their own films of it—what people will do to a car if they think it is driverless. There’s a gamesmanship.”

Experts have long said the first death involving a driverless or partially autonomous car was only a matter of time. After all, there are more than 30,000 traffic fatalities every year in the United States alone. The fact that Google’s driverless fleet has logged more than 1.5 million miles in fully autonomous mode and caused just one minor accident along the way is remarkable. But even the biggest advocates for driverless technologies say a perfect track record on safety isn’t sustainable.

What remains to be seen is how people will react—both culturally and from a regulatory standpoint—to driverless-car deaths when they occur.

So far, the self-driving car industry—which includes Tesla, Google, and several existing automakers—has resisted establishing universal safety standards. Their testing data is also proprietary. It’s possible that, in the wake of Tesla’s fatality, lawmakers will revisit the possibility that such standards should be drawn up by the government, which could force more public scrutiny of the technology.

“I think if they would come together and set their own standards, that would be very beneficial to them—instead of having federal oversight,” Cummings said. “I have no confidence that the U.S. government can put together what I would say would be good safety standards. But I do think industry can.”
