Waymo’s Self-Driving Car Crashed Because Its Human Driver Fell Asleep at the Wheel

John Krafcik, CEO of Waymo, introduces a Chrysler Pacifica hybrid outfitted with Waymo's own suite of sensors and radar, at the North American International Auto Show in Detroit. Paul Sancya/AP File Photo

The dozing driver didn’t respond to any of the vehicle’s warnings.

In June, one of Waymo’s self-driving Chrysler Pacifica minivans crashed on the freeway outside of the company’s office in Mountain View, California, after its lone safety driver fell asleep at the wheel.

Tech news site The Information, which first reported the crash, said the human driver manning the vehicle “appeared to doze off” after about an hour on the road, according to two people familiar with the matter. The safety driver unwittingly turned off the car’s self-driving software by touching the gas pedal. He failed to assume control of the steering wheel, and the Pacifica crashed into the highway median.

The dozing driver didn’t respond to any of the vehicle’s warnings, including a bell signaling that the car was in manual mode and another audio alert, The Information reported. He regained alertness once the car crashed, then turned around and headed back to the Mountain View office. He no longer works for Waymo.

Waymo got lucky with the accident. The safety driver wasn’t hurt and no other vehicles were involved. Waymo reported the vehicle sustained “moderate damage to its tire and bumper.” The company told The Information in a statement that it is “constantly improving our best practices, including those for driver attentiveness, because the safe and responsible testing of our technology is integral to everything we do.”

Improvements in this case meant altering night-shift protocol to have two safety drivers instead of one, to guard against someone nodding off at the wheel. At a company meeting to discuss the incident, one attendee reportedly asked whether safety drivers were on the road too long, and was told that drivers can take a break whenever they need to.

Waymo is pursuing fully self-driving software that wouldn’t require any intervention from humans, in contrast to automakers like Tesla and General Motors, which have started with selectively automated features to assist human drivers. As Waymo has gotten closer to true autonomy, it has also tried to reduce its reliance on human safety drivers, for example by cutting the number of safety drivers in a test vehicle from two to one. Waymo plans to launch a commercial ride-hail service with driverless cars in the Phoenix area this year.

After a self-driving Uber struck and killed a pedestrian in Tempe, Arizona, in March, one point of focus was Uber’s safety-driver policies. Jalopnik pointed out that “almost everyone”—Toyota, Nissan, Ford’s Argo AI—uses two people to test self-driving cars. The Uber Volvo that crashed, on the other hand, had a lone safety driver, Rafaela Vasquez, working at night. Police later found she had been streaming The Voice on her phone at the time of impact.

One thing everyone working on driverless cars agrees on is that humans are bad drivers. People from Waymo CEO John Krafcik to disgraced former Uber engineer Anthony Levandowski—try finding a more diametrically opposed pair—like to talk about how driverless cars will save lives by eliminating thousands of preventable highway fatalities a year.

It is baffling, then, that these companies trust the very humans they seek to unseat to watch over their adolescent technology, alone and for hours on end. One autonomous-vehicle safety driver once described to me working 10- to 11-hour shifts unaccompanied, including nights that began in the early evening and ended well past midnight. Drivers could take breaks whenever they wanted, this person said, but it was still a challenge to stay focused for that long without anyone to talk to, or much to do beyond watching the road.

A few months after the Tempe accident, Uber laid off most of its self-driving car operators in Pittsburgh and San Francisco. Uber said it would replace these people with “mission specialists” trained to monitor its cars on roads and on specialized test tracks. These mission specialists are supposed to be more involved in the actual development of the cars, tasked with tracking, documenting, and triaging any issues that might crop up. Per a current job listing, they should have “the ability to operate independently with little or no supervision.”

There is a great essay by reporter Tim Harford about how our quest to automate all things may be setting us up for disaster. The more we let computers fly planes, drive cars, operate machinery, and so on, the less chance the people we’ve put in place as backup—pilots, safety drivers, and other operators—get to practice their skills, and the greater the odds they’ll be unprepared in a true emergency. This problem is known as the paradox of automation, and it shows up in benign ways, too: we struggle to remember phone numbers stored in our mobile devices, or to do mental arithmetic we could simply punch into a calculator. Like any skill, these abilities need to be practiced to be maintained, and they grow rusty with disuse. Instead of designing technology for humans to babysit, Harford wonders, why aren’t we making technology that babysits humans?