Microsoft CEO Satya Nadella on Artificial Intelligence, Algorithmic Accountability, and What He Learned From Tay

Microsoft CEO Satya Nadella. (Photo: Elaine Thompson/AP)

Microsoft has made a commitment to infuse AI throughout its business.

Microsoft is quickly pivoting to position itself as a leader in artificial intelligence. In his second keynote on the subject this year, CEO Satya Nadella stood in front of IT professionals on Sept. 26 and reiterated the company’s commitment to infusing every part of Microsoft’s business, from cloud services to Microsoft Word, with some kind of AI. Quartz caught up with Nadella after he hopped off stage to talk about the progress of his quest to build machines that assist humanity in a transparent way.

Earlier this year you started saying we need to create transparent machines, ethical machines, accountable machines. What has been done since then? What is concrete?

I think the first thing I’ve got to do, even inside of Microsoft, is to lay out principles and raise consciousness among developers, like we did with user interface. Are we building machines that are helping humans, that are augmenting humans? [I] want to take any product that Microsoft builds and first ask: what is the augmentation, what is the empowerment? What is the way that we have done the training such that we can be held accountable when it comes to algorithms? That’s the concreteness, if you will. But I don’t want to think about this as, “OK, let’s keep a scorecard.”

It’s that design sensibility, in our developers, our designers, our product choices. Just as we would have talked about design principles for good user experience, what are the design principles for good AI? That, to me, is one of those fascinating problems—what does it mean to have algorithmic accountability when you’re training a deep neural net? It becomes that conscious set of choices we make around engineering, guiding principles that should help. So that’s the approach we’re taking.

When you’re working with things like image recognition and developers depend on your services, if something goes wrong, that can reflect poorly on them. Is there an openness that’s necessary for customers as well as end users?

We can’t know all the use cases. People will use these Cognitive APIs, whether it’s image recognition or speech recognition, for whatever it is they’re doing. We are not going to be the “censors” or editors.

Let’s take image recognition. If our image recognition API by itself had some bias, because of a lack of data, or the way the feature selection happened, or the way we set up the convolutional neural network we designed, I fully think we’ve got to be accountable, just as we are accountable for bugs. Because for all of our talk of AI, ultimately human engineers are defining the parameters within which AI works. It’s not that we want to be perfect for everything every day, but if somebody finds something that is wrong, we’ll retrain it.
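As a rough illustration of the accountability Nadella describes, a minimal bias audit might compare a classifier’s accuracy across groups of inputs and flag any group that falls behind as a candidate for retraining. The model interface, data format, and threshold below are assumptions for illustration, not Microsoft’s actual pipeline:

```python
from collections import defaultdict

def audit_by_group(model, samples, min_accuracy=0.9):
    """samples: iterable of (image, label, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, label, group in samples:
        total[group] += 1
        if model.predict(image) == label:  # assumed classifier interface
            correct[group] += 1
    # Flag any group whose accuracy falls below the threshold: a signal
    # that the training data may underrepresent it and that the model
    # should be retrained, as Nadella says.
    return {group: correct[group] / total[group]
            for group in total
            if correct[group] / total[group] < min_accuracy}
```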

Moving on to chatbots: you can get a bot to generally understand what a customer is saying, but generating text is still a big research problem. How do we make chatbots sound more intelligent?

There are multiple levels to this problem. Teaching computers human language is one of the ultimate quests. So to me, take the baby steps first. Before we even generate language, let us first understand the turn-by-turn dialogue. But the ultimate goal, full language generation, is an artificial general intelligence problem; it’s not a narrow AI problem. You have to have an artificial general intelligence, a general learner that fully understands the semantics of everything in human knowledge and vocabulary.
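As a sketch of that “baby step,” here is a toy turn-by-turn dialogue understander: it classifies each user utterance into an intent and accumulates dialogue state, without generating any language. The intents and keywords are invented for illustration:

```python
# Invented intents and keyword sets; a real system would use a trained
# intent classifier rather than keyword matching.
INTENT_KEYWORDS = {
    "get_quote": {"quote", "price", "cost"},
    "file_claim": {"claim", "accident", "damage"},
    "greeting": {"hello", "hi", "hey"},
}

def classify_turn(utterance, dialogue_state):
    """Map one user turn to an intent and record it in the dialogue state."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            dialogue_state.append(intent)
            return intent
    dialogue_state.append("unknown")
    return "unknown"

state = []
classify_turn("Hi there", state)                      # returns "greeting"
classify_turn("How much does a quote cost?", state)   # returns "get_quote"
```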

Whenever you have ambiguity and errors, you need to think about how you put the human in the loop and escalate to the human to make choices. That, to me, is the art form of an AI product. If you have ambiguity and error rates, you have to be able to handle exceptions. But first you have to detect the exception, and luckily in AI you have confidence scores and probability distributions, so you have to use all of those to get the human in the loop.
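A minimal sketch of that exception detection, assuming a model that exposes a probability distribution over candidate answers: if the top answer’s confidence falls below a threshold (0.8 here, an arbitrary assumption), the query is handed to a human rather than answered. The function names are hypothetical:

```python
def escalate_to_human(query):
    # Placeholder hand-off: a real product would route the conversation
    # to a support agent's queue.
    return f"[escalated to a human agent: {query!r}]"

def answer_or_escalate(model, query, threshold=0.8):
    # model.predict_proba is assumed to return {answer: probability}.
    distribution = model.predict_proba(query)
    best_answer, confidence = max(distribution.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return best_answer
    # Low confidence is the detected "exception": put the human in the loop.
    return escalate_to_human(query)
```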

Take even customer support. We don’t think the virtual assistant is going to answer everything. It can escalate to the customer service rep itself; the bot goes from being the front to the right-hand side, the agent answers the questions, and then the virtual agent learns from the agent through reinforcement learning. That process is what will help us get better and better. But there need to be breakthroughs in generalized learning for us to be able to do it.
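Here is a minimal sketch of that escalation loop, simplified to supervised logging rather than full reinforcement learning: the bot fronts the conversation, hands off anything it can’t answer confidently, and records the human agent’s answer as a training example for a later retraining pass. All names and the threshold are illustrative:

```python
training_examples = []  # (question, answer) pairs harvested from agents

def handle_ticket(bot, agent, question, threshold=0.8):
    answer, confidence = bot.answer(question)  # assumed (text, score) API
    if confidence >= threshold:
        return answer
    # The bot moves "from the front to the right-hand side": the human
    # agent answers, and the exchange is logged so the bot can learn
    # from it on the next training pass.
    human_answer = agent.answer(question)
    training_examples.append((question, human_answer))
    return human_answer
```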

How do you maintain interest in chatbots while people are working on these breakthroughs? How do you defeat hype?

That’s where product choice comes in. It’s a little bit of art, it’s a little bit of design, and it’s a bunch of AI capability, but that’s what we learn. Even in Cortana, we solved a lot of hard problems, and we’re realizing that a lot of people love to listen to jokes in Cortana. So we think, “Wow, that’s cool, let’s make it possible for them to do that.” It’s not just about the tech; we have to find that golden loop between tech and design so they can co-evolve.

Do you think that there’s a design so that every business can have a bot? Is it applicable to every business?

I think we’ll find out. I do believe that there are certain businesses and certain business processes, like buying insurance, that are very bot-friendly, just by design. In fact, regulatory needs are such that when you’re making insurance purchases, a bot is so much better than trying to navigate a mobile app or a website. So that’s at least one use case. We are learning a ton from these developers.

When does securing AI against attacks or reverse-engineering become more of an issue?

It’s an issue now. One of my biggest learnings from [chatbot] Tay was that you need to build AI that is resilient to attacks. It was fascinating to see what happened on Twitter; we didn’t face the same thing in China, for instance. The social conversation in China is just different, and if you put the bot in the US corpus it behaves differently. And then, of course, there was a concerted attack. Just as you build software today that is resilient to a DDoS attack, you need to be resilient to a corpus attack that tries to pollute the corpus so that your AI learners pick up the wrong thing. We are very much there, trying to deal with those challenges.
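One hedged sketch of what corpus-attack resilience could look like in practice: screen and rate-limit user-submitted messages before they enter the learning corpus, rather than training on the raw stream. The blocklist stands in for a real toxicity model, and the per-user cap limits any single account’s influence; none of this is Tay’s actual safeguard:

```python
from collections import Counter

BLOCKLIST = {"badword1", "badword2"}  # stand-in for a real toxicity model
MAX_PER_USER = 50                     # cap any single account's influence
seen_per_user = Counter()

def admit_to_corpus(user_id, message, corpus):
    """Screen one user message before it may enter the learning corpus."""
    if seen_per_user[user_id] >= MAX_PER_USER:
        return False  # resist flooding by a single concerted attacker
    if set(message.lower().split()) & BLOCKLIST:
        return False  # drop obviously polluted input
    seen_per_user[user_id] += 1
    corpus.append(message)
    return True
```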

We were [building Tay] as a prototype to learn. Right now, given the media cycle we have, there is no distinction between a prototype and a released product. And of course, deliberately so; if you’re doing it in a public way, there’s no denying it’s going to be out there. On some level it was a shock, but at the same time, we were not saying we wanted to launch something that was going to be perfect. It’s not like a Windows 10 launch; it was a research project, not a product effort. And it helped us, even with all of the “reaction.” Even independent of the reaction to us, it was a great call to get more grounded in the design principles we talked about: having stronger algorithmic accountability. What does QA, quality assurance, mean? Do you launch on a public corpus, or do you launch in a different corpus first and watch? These are all techniques that we’re learning and improving on.