Google Moonshot Lab Cofounder Sebastian Thrun Talks Flying Cars, Automated Teaching, and an AI Arms Race With China

Sebastian Thrun has a history of getting in front of trends. He’s known as one of the inventors of self-driving cars, helped launch Google’s [x] moonshot lab, co-founded online education startup Udacity, and has most recently gotten press for his work building flying cars for Larry Page (he’s eyeing early 2018 to put them on the general market).

This week, Udacity launched a new nanodegree: a course that teaches the technology used in building flying cars. Quartz sat down with Thrun to talk about flying cars, automated learning, and the United States’ standing in AI.

What’s the equation for you that says flying cars are ready to be built?

The biggest advance actually has been the advancement of drones, driven by the rapid improvement of motor controls, so nothing that surprising. The big surprise is battery technology. Look at the amount of energy it takes to stay in the air: there are these magic numbers, below which it cannot be done at all, and above which it can be done.
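To make those magic numbers concrete, here is a rough sketch using standard momentum (actuator-disk) theory: it estimates the ideal power needed to hover and the endurance an assumed battery pack could support. The vehicle mass, rotor size, efficiency factor, and pack energy below are illustrative assumptions, not figures from the interview.

```python
# Rough, illustrative estimate of hover power and endurance.
# All numbers are assumptions chosen for illustration.
import math

g = 9.81          # gravity, m/s^2
rho = 1.225       # sea-level air density, kg/m^3

mass_kg = 1.5               # assumed small multirotor, battery included
rotor_radius_m = 0.12       # assumed radius of each rotor
num_rotors = 4
disk_area = num_rotors * math.pi * rotor_radius_m ** 2

thrust = mass_kg * g        # thrust needed to hover, N

# Ideal induced power to hover: P = sqrt(T^3 / (2 * rho * A)).
# Real vehicles need roughly 1.5-2x this once motor, propeller,
# and electrical losses are included.
p_ideal_w = math.sqrt(thrust ** 3 / (2 * rho * disk_area))
p_real_w = 1.8 * p_ideal_w  # assumed overall loss factor

battery_wh = 50.0           # assumed usable pack energy
endurance_min = 60.0 * battery_wh / p_real_w

print(f"ideal hover power: {p_ideal_w:.0f} W")
print(f"estimated real hover power: {p_real_w:.0f} W")
print(f"estimated hover endurance: {endurance_min:.0f} min")
```

The point of the exercise is the threshold Thrun describes: below a certain battery energy density, the pack needed to sustain that hover power is too heavy to lift; above it, useful flight times become possible.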

What drones showed to the world was that flight could be easy and safe. Before that, anyone who wanted to fly something, whether it be a helicopter or fixed-wing airplane, had to train for hundreds of hours, because there are so many failure modes. In drones, the most recent generation uses autonomous computer software to take over the hard part of flying, and gives the user the easy part. That’s the biggest innovation.
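To make that division of labor concrete, here is a minimal, generic sketch of an inner stabilization loop: the pilot supplies a simple high-level command and the software does the fast correction work underneath. The PID class, gains, and numbers are illustrative assumptions, not any real autopilot’s code.

```python
# Minimal sketch of the split Thrun describes: the pilot gives an easy,
# high-level command (desired tilt angle); an inner loop runs fast and
# handles the hard part of keeping the vehicle stable.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def stabilize_step(pitch_controller, desired_pitch_deg, measured_pitch_deg, dt):
    """One inner-loop step: turn the pilot's simple command into a motor
    correction. In a real multirotor this runs hundreds of times per second
    for roll, pitch, and yaw, using gyro and accelerometer measurements."""
    correction = pitch_controller.update(desired_pitch_deg, measured_pitch_deg, dt)
    return correction  # would be mixed into individual motor commands


# Example: the pilot asks for a gentle 5-degree forward pitch; the loop
# nudges the motors until the measured attitude matches.
pitch_pid = PID(kp=0.8, ki=0.1, kd=0.05)  # assumed gains, for illustration only
print(stabilize_step(pitch_pid, desired_pitch_deg=5.0, measured_pitch_deg=0.0, dt=0.005))
```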

The race to build self-driving cars is partly fueled by a need for data. What kind of data is important for flying cars?

When you’re in the air, data is important… but it’s more sparse. When you come near the ground, to deliver packages with drones or land safely to pick up a passenger, we need to really understand how to interface with the ground and different structures. The most important thing I would say is power wires, because those tend to kill a lot of helicopter pilots, but also vegetation and trees and so on.

What we haven’t really explored in depth as a society yet is, what if you just have a city where you can land anyplace that’s clear on the ground? You’re going to kick things up as the airstream hits the ground. What will your neighbors say when you do this? What are the noise regulations? If Amazon is delivering 10,000 packages a day in San Francisco, I would bet there would be some consideration for the noise levels. Finally, any vehicle is going to have failure modes. There might be a safety parachute or what have you, but what is a safe way to fly?

Are you bullish on the idea of AI and education? Udacity seems to scale human teaching rather than letting algorithms take over.

Yes. We have brought more and more AI very quietly into our own education. We have AI analyze students to understand their performance situation, we have AI prompting our own actions toward students, [and] we are now beginning to use AI for grading. My general hypothesis is that for almost any generally repetitive task, if you have enough data, an AI that looks over your shoulder can take over more and more of it. And it doesn’t have to be a binary switch, where we go today from human grading to computer-augmented grading. It’s perfectly sufficient to say: if this AI system proposes in 90% of cases what to write in the grading report, and the user has the chance to modify and adapt it, the user gets a great efficiency increase.

In our typical [Udacity] projects, there are usually four, five, six fundamentally different ways to get something wrong. There are a hundred thousand ways to get the code wrong, but three or four principal errors you could make. We’ve successfully trained AI in our machine-learning curriculum to observe a submission like this, observe how the human grader characterizes what line of code might be wrong and why, and then learn to make that call by itself. Then instead of sending it to the student, which we never do, we give it back to the human grader and say, “Human grader, this might be the root cause, can you please verify?” And the human grader might say that it’s completely wrong and ditch what the AI just did and do something better, or might just click and say accept, but in many cases there are two effects: It makes the human grader more efficient as the AI gets more and more right, but it also makes them better. Grader A can now learn from grader B, so if we have a new grader in the system, the proposals they get as to what to say will all be learned from other, more experienced graders. It has a huge impact on performance.
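As a hedged sketch of that loop (not Udacity’s actual system), the pattern looks roughly like this: a classifier trained on past human grading decisions proposes a likely root-cause error for a new submission, and the human grader accepts or overrides the proposal before anything reaches the student. The error categories, example submissions, and function names below are hypothetical.

```python
# Hedged sketch of the human-in-the-loop grading pattern described above.
# Not Udacity's system; categories and data are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Past submissions (code as text) labeled by human graders with one of a
# handful of principal error categories.
past_submissions = [
    "model.fit(X_train)                      # forgot to pass y_train",
    "score = model.score(X_train, y_train)   # evaluated on training data",
    "X = scaler.fit_transform(X_test)        # fit the scaler on test data",
    "model.fit(X_train, y_train)             # looks correct",
]
past_labels = [
    "missing_target",
    "evaluated_on_train",
    "leaked_test_data",
    "no_error",
]

# Train a simple text classifier over past grading decisions.
grader_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
grader_model.fit(past_submissions, past_labels)


def propose_and_verify(submission, human_grader):
    """Propose a root cause, then let the human grader accept or override it."""
    proposal = grader_model.predict([submission])[0]
    final_label = human_grader(submission, proposal)  # human stays in the loop
    return final_label


def example_grader(submission, proposal):
    """Stand-in for the human grader: accept, edit, or reject the proposal."""
    print(f"AI proposes {proposal!r} for {submission!r}")
    return proposal  # in practice the grader may replace this entirely


new_submission = "score = model.score(X_train, y_train)"
label = propose_and_verify(new_submission, example_grader)
print("final label:", label)
```

The efficiency and consistency effects Thrun mentions fall out of the same loop: every accepted or corrected label becomes new training data, and new graders see proposals learned from the decisions of more experienced ones.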

There’s a narrative building that we’re in an arms race against China in AI. What do you think about that?

First of all, I will say we are in a race, not just in AI but in pretty much every technology from cell phones to laptops, even including drones. Secondly, none of these are zero-sum games; they’re all net positives. Imagine a world in which all repetitive work was done by machines. We would be so much more productive, and our goods could be so much cheaper. It would make education cheaper, housing cheaper, food cheaper, transportation cheaper, medical care cheaper. The benefit is undeniably enormous, and if anyone denies this and says they can’t see it, I would ask them to pick up a book that describes what the world looked like 500 or 300 years ago, before running water for example, and ask if that’s the standard of living you want today.

Having said this, China has announced [plans to put] AI research and development and technology at the forefront of the national agenda, which means there’s an encouragement to start corporations [and] there’s increased funding in universities. I am sometimes concerned when colleagues like Elon Musk portray AI as a threat to humanity. For one, it isn’t, but that framing might actually slow down progress in the United States, with regulators trying to regulate something that hasn’t even yet unfolded its full power. So I would say the right response for the United States would be to stay right where we are, since we’re currently number one in AI, and to double down on investments in the area.