Trust Me, I’m An Artificial Intelligence From Washington


A lot of the old black and white television shows and movies I watch—though some of them are through “Mystery Science Theater 3000,” so probably not the highest quality programs—feature government agents operating far outside of D.C. What normally happens is that they arrive in some small town during an incident and the people there are naturally suspicious of them, so they flash their badges and say something like “Trust us, we’re from Washington, and we’re here to help.” Thereupon the locals immediately drop their opposition and attentively listen as the government agents explain exactly how the town is going to be saved from the alien invasion, Russian spies, pending natural disaster or any number of other unusual or fantastic occurrences.

These days, the old “Trust us, we’re with the government” slogan is probably more of a meme than a way to ensure instant cooperation. People tend to question everything, and perhaps rightfully so. And when it’s not even a person, but a government-created artificial intelligence asking for their trust, it adds yet another potential barrier to the relationship.

That could be a big problem, especially for the military, as it strives to create the so-called third wave of AI. Third-wave AIs are supposed to react to changing situations and reason their way to the best solutions, faster and more accurately than a human could. That is one of the main goals of the Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program, which is in turn part of a massive $2 billion investment the Defense Advanced Research Projects Agency is making in artificial intelligence. Called AI Next, that program may eventually field over 60 projects aimed at making AI smarter and more reliable.

But none of that will do much good if people simply ignore the advice of their machine helpers or even rebel against them. While attitudes could change over time, for now, humans overall don’t seem willing to place their trust in AI, even when given assurances that doing so would be in their best interest. Just because someone says, “You can trust this AI. It was made in Washington and it’s here to help,” that doesn’t make people believe it.

That is what the military is finding out with recent trials of human and AI partnerships. Researchers from the Army Research Laboratory, working with the University of California, Santa Barbara, set up a game to test how willing people were to put their trust in a government-built AI. The results did not bode well for future human and AI partnerships.

The game that humans and AI played was a variation of the classic Prisoner’s Dilemma, devised at the RAND Corporation in 1950. The game involves two prisoners, neither of whom can communicate with the other. Police have enough evidence to put both prisoners away for a short while for a minor crime but lack evidence connecting them to a major crime they committed together. Each prisoner is offered the same deal. The dilemma is that if they both stay quiet, then they are each sentenced to a little time in jail but are not charged with the bigger crime. If they both talk, then they both go away for a long time for the more serious crime. And if one talks but not the other, then the snitch gets off scot-free while the quiet prisoner is punished for the major crime.
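For readers who think better in code, the payoff structure can be sketched roughly like this. The sentence lengths below are purely illustrative, not figures from the Army experiment:

```python
# Illustrative payoff table for the classic Prisoner's Dilemma.
# "quiet" = stay silent (cooperate), "talk" = confess (defect).
# Each entry maps (my_choice, partner_choice) to years in prison for "me".
PAYOFF = {
    ("quiet", "quiet"): 1,   # both stay quiet: short sentence for the minor crime
    ("quiet", "talk"):  10,  # I stay quiet, partner talks: I take the fall
    ("talk",  "quiet"): 0,   # I talk, partner stays quiet: I walk free
    ("talk",  "talk"):  8,   # both talk: both convicted of the major crime
}

def sentence(my_choice: str, partner_choice: str) -> int:
    """Return my prison sentence (in years) for a single round."""
    return PAYOFF[(my_choice, partner_choice)]

if __name__ == "__main__":
    for me in ("quiet", "talk"):
        for partner in ("quiet", "talk"):
            print(f"me={me}, partner={partner}: {sentence(me, partner)} year(s) for me")
```

Whatever the other prisoner does, talking always looks better for the individual, which is exactly why both end up worse off than if they had both stayed quiet.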

To make matters more complicated, and thus bring in the need for the artificial intelligence helper, participants played an iterated game. In that version, the same players participate in multiple exercises and are thus able to punish their “partner” for previously selling them out, or to reward one another for playing well and keeping quiet. Iterated games are surprisingly complex, with a lot of math and probability calculations needed to make an optimal choice each time.
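Again purely as an illustration, and reusing the same hypothetical payoffs as above, a few lines of Python show why repeated play rewards strategies that remember what a partner did in earlier rounds:

```python
# A minimal sketch of an iterated Prisoner's Dilemma. This is not the
# Army/UCSB experiment itself, just an illustration of repeated play.

PAYOFF = {  # (my_choice, partner_choice) -> my years in prison this round
    ("quiet", "quiet"): 1,
    ("quiet", "talk"):  10,
    ("talk",  "quiet"): 0,
    ("talk",  "talk"):  8,
}

def tit_for_tat(history):
    """Cooperate first, then copy whatever the partner did last round."""
    return "quiet" if not history else history[-1]

def always_talk(history):
    """Defect every round, regardless of history."""
    return "talk"

def play(strategy_a, strategy_b, rounds=20):
    """Run an iterated game and return total years in prison for each player."""
    a_history, b_history = [], []        # what the *other* player did each round
    a_total = b_total = 0
    for _ in range(rounds):
        a_move = strategy_a(a_history)   # A sees B's past moves
        b_move = strategy_b(b_history)   # B sees A's past moves
        a_total += PAYOFF[(a_move, b_move)]
        b_total += PAYOFF[(b_move, a_move)]
        a_history.append(b_move)
        b_history.append(a_move)
    return a_total, b_total

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))   # mutual cooperation keeps both totals low
    print(play(tit_for_tat, always_talk))   # a habitual defector drags both into long sentences
```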

Participants were given the rules of the game before taking on the role of a prisoner. They were then told that an AI would help them play and that it was designed to recommend the best course of action. But whether to take the AI’s advice was up to them. More often than not, they didn’t listen. In fact, quite a few players, especially those who said they were familiar with the concept of the game, either didn’t use their AI partner at all or simply ignored everything it said. Only people who had never played the game before consistently leaned on the AI for advice. The so-called experts mostly treated it with disdain.

“This might be a harmless outcome if these players were really doing better—but they were, in fact, performing significantly worse than their humbler peers, who reported knowing less about the game beforehand,” ARL scientist Dr. James Schaffer wrote in the after-action report.

The other interesting revelation from the Army experiment was that people who originally said they were more likely to trust an AI before the game actually used it less than others who were unsure of what they would do. That might mean that humans harbor a hidden bias against AI that they may not even be aware of. Or it could mean that while they are comfortable asking the AI on their smartphones which deli has the best pastrami sandwich in the area, when it comes to something serious, like how many simulated years they are going to spend in prison, they would rather not leave their fate in the hands of a machine.

The Prisoner’s Dilemma game might not be a perfect model of human and AI partnership, but it does shine a light on an interesting point that DARPA may not have previously considered. Building a better, smarter AI is great, but the agency also needs to find a way to make it trustworthy, or even likable. People probably haven’t fully trusted those “agents from Washington who are there to help” since the 1950s. It will take much more than a simple introduction to get them to put their faith in a machine when lives are on the line.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys