AI Needs to Earn Our Trust, Just Like Any Human Relationship

It’s awkward to correct a stranger when they’re wrong. How did you feel when Miguel from IT, whom you’ve only met once, told you that “learnings” wasn’t a real word? Or when Robin lectured you about the correct pronunciation of “macaron” at the office holiday party? Direct feedback that teaches rather than chides requires trust, and trust takes time.

Unlike humans, AI craves to know when it’s wrong. The entire premise behind machine learning is that it learns from failure or success. For example, we may feel that weather algorithms always get it wrong, but they can only improve by comparing their forecast to what actually happened. It’s therefore our job to tell the system when it’s wrong and acknowledge when it gets it right—but that requires a level of trust.
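To make that feedback loop concrete, here is a toy sketch in Python. The model, numbers, and names are entirely made up; the point is simply that a forecaster can only improve once it compares its prediction to what actually happened.

```python
# A toy sketch of the feedback loop described above. Everything here is
# illustrative: the forecaster only gets better when it is told how wrong
# its last prediction was.

class ToyForecaster:
    def __init__(self, weight: float = 0.5, learning_rate: float = 0.001):
        self.weight = weight
        self.learning_rate = learning_rate

    def predict(self, todays_temp: float) -> float:
        # Naive model: tomorrow looks like a scaled version of today.
        return self.weight * todays_temp

    def learn(self, todays_temp: float, observed_temp: float) -> None:
        # Feedback step: measure how wrong the forecast was, then nudge
        # the model in the direction that would have reduced that error.
        error = observed_temp - self.predict(todays_temp)
        self.weight += self.learning_rate * error * todays_temp


model = ToyForecaster()
history = [(20.0, 22.0), (22.0, 21.0), (21.0, 23.0)]  # (today, what tomorrow actually was)
for today, actual_tomorrow in history:
    print(f"forecast: {model.predict(today):.1f}, actual: {actual_tomorrow:.1f}")
    model.learn(today, actual_tomorrow)  # without this step, it can never get better
```

Strip out the feedback step and the forecasts never change, no matter how wrong they are. That is the sense in which the machine needs us to tell it when it has erred.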

We can often tell if our friends or partners are unhappy with our actions, even if they don’t directly tell us. We can read their facial expressions, subtle changes in their tone of voice, and body language. Machines, however, don’t have that kind of intuition. Until we’re all walking around with some kind of lightweight EEG machine strapped to our heads, machines will need to find proxies for how to sense human emotions, desires, and preferences. The obvious way to do this is just to ask us, but that manufactures a transactional relationship, not a trusting one.

All I’m asking AI is for a little respect

A good example of an AI asking for trust rather than earning it can be found in the personal styling services that are popping up. They are doing some amazing things in integrating data science into every component of their business to drive growth. But my experience put me in the awkward position of feeling like I needed to explain myself and share sensitive personal details with someone I’d just met.

Signing up for the subscription service begins with a long and detailed survey about body type and style preferences. I spent a good 30 minutes thoughtfully answering myriad questions to create my style profile, and yet I returned everything that they sent me in the first box. None of it jibed with my personal style, and only one thing fit.

I had put the time into creating a relationship with the AI—training it with my intimate information, such as how my body was shaped—and that process had given me a false sense of mutual understanding. If it had just taken a look at my Instagram profile and sent me a box, I would have given it permission to get it all wrong. But its surprisingly clunky and unnatural way of “sensing” my preferences for clothes set an unrealistically high expectation of what was to follow. Questions about slightly nuanced styles of plaid led me to believe that it could understand I like simple patterns—and then it sent me a shirt better fit for my grandfather.

By asking me to train a system before we had established a relationship of trust, the AI had breached it before it had even formed. When I’m in a trusting relationship, I can forgive someone for an error. But this felt different—like I’d been duped into a false sense of intimacy. I’m going to try one more box, but I have a strong hunch I’ll be cancelling my subscription soon.

Trust is earned, not machine learned

While rare, there are a few AIs with whom I’m in a trusting relationship of mutual respect. I relish my moments of training them because they respect my time and provide me with value as I train them—even when the AI is mostly wrong. I’ve come to truly value those moments when they tell me I’m wrong, because they do so in a way that deeply respects my position as a human and their position as a machine.

Consider Spotify’s Discover Weekly feature. Spotify assembles a list of songs that it believes I might like based on what I’ve been listening to, what others are listening to, and what music journalists are writing about on the web. It puts these recommendations into a playlist it presents to me once a week. Everything about Discover Weekly invites my feedback. Even its name isn’t presumptuous: If Spotify had titled this feature “Your New Favorites,” I would have entered this relationship looking for reasons to prove it wrong. Instead, I’m always pleased when it unearths something I would never have discovered myself.

The format of the playlist is likewise brilliant: I don’t need to rate the songs but simply listen to them or skip them after I’ve heard a bit, and songs that I love get added to my own playlists or saved to my phone. Over time, it learns my preferences and can dish me up even better tunes. There’s no artificiality to training Spotify, and each and every time I train it, there’s value back to me as the user. Most importantly, Spotify has earned my trust through helping me discover a slew of new artists and, in doing so, earned its right to be wrong. (I don’t really love St. Vincent, but I understand why Spotify thinks I would.)
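None of this requires knowing Spotify’s actual algorithm. The underlying pattern, implicit feedback in which listens and skips quietly nudge preferences, can be sketched in a few lines. Everything below is hypothetical: the artists, weights, and scoring are invented for illustration.

```python
# A hypothetical sketch of implicit feedback, not Spotify's actual system.
# Listens and skips quietly adjust per-artist scores; nobody is asked to
# fill out a survey.

from collections import defaultdict

preference = defaultdict(float)  # artist -> running preference score

def register_play(artist: str, finished: bool) -> None:
    # Finishing a track is a positive signal; skipping is a mild negative one.
    preference[artist] += 1.0 if finished else -0.25

def weekly_picks(candidates: list[str], k: int = 5) -> list[str]:
    # Surface the best-scoring candidates. Unknown artists start at zero,
    # so new discoveries still get a chance to appear.
    return sorted(candidates, key=lambda a: preference[a], reverse=True)[:k]

register_play("St. Vincent", finished=False)   # skipped: the AI earned its right to be wrong
register_play("New Artist A", finished=True)
register_play("New Artist A", finished=True)
print(weekly_picks(["St. Vincent", "New Artist A", "New Artist B"], k=2))
```

The design choice worth noticing is that every signal comes from something I was already doing for my own benefit, which is why training it never feels like work.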

But we’re not the only ones doing the teaching. When it comes to “training” humans, Waze and Google Maps are setting the standard. Their features are clearly based on a deep understanding of the common human emotional conditions that we endure during traffic: stress, anxiety, impatience, and a deep temptation to bail on our algorithmically recommended route in favor of our own “shortcuts” (which are almost never shorter).

Maybe the most brilliant bit of human-centered design inside these navigation apps is how they highlight alternative routes and how much longer they’d take. Rather than asking us to blindly trust the route the AI has chosen, the app acknowledges its own possible fallibility and reassures us by showing the other routes we might be considering. Thinking of bypassing the highway? That’ll take another four minutes. Want to cut through that residential area? Here’s some construction of which you might not be aware.
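As a rough illustration (and not the real Waze or Google Maps logic), the design choice amounts to something like this: compute the recommended route, then present each alternative alongside how much longer it would take and any caveat worth knowing.

```python
# A small illustration of the design choice described above, with invented
# routes and times: show every candidate route next to how much longer it
# would take than the recommendation, instead of hiding the alternatives.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float
    note: str = ""

def explain_alternatives(routes: list[Route]) -> None:
    best = min(routes, key=lambda r: r.minutes)
    for route in sorted(routes, key=lambda r: r.minutes):
        if route is best:
            label = "recommended"
        else:
            label = f"+{route.minutes - best.minutes:.0f} min"
        caveat = f" ({route.note})" if route.note else ""
        print(f"{route.name}: {label}{caveat}")

explain_alternatives([
    Route("Stay on the highway", 22),
    Route("Surface streets", 25),
    Route("Residential shortcut", 26, note="construction you might not know about"),
])
```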

In this way, Waze and Google Maps serve as a sort of angel on our shoulder, counseling us away from our inner traffic-hating demons. They’re not “AI-splaining” or chiding us for considering alternate routes, but rather giving us the necessary information to make the right decision. They’re rooted in a deep respect of human agency.

At IDEO, we prefer to think of AI as Augmented Intelligence rather than Artificial Intelligence. Taking a human-centered approach to building relationships between AI and humans compels us to meet humans on their terms, building relationships of trust and respect, and always remembering that intelligent systems must exist in service of humanity, not the other way around.

Beautiful, human-centered, human-AI relationships are about understanding human beings and our wonderful, weird intricacies and inconsistencies. They’re about designing the experience of AI around humanity, rather than the other way around. AI can only function well if it learns from its mistakes, and that means establishing trusting relationships with humans so they’ll feel comfortable saying when it’s wrong.

Justin Massa is a senior portfolio director with IDEO.