Even trash talk from a robot throws people off their game, research finds.
The trash talk in the study was decidedly mild, with utterances like, “I have to say you are a terrible player,” and “Over the course of the game your playing has become confused.” Even so, people who played a game with the robot—a commercially available humanoid robot known as Pepper—performed worse when the robot discouraged them and better when the robot encouraged them.
Lead author Aaron M. Roth says some of the 40 study participants were technically sophisticated and fully understood that a machine was the source of their discomfort.
“One participant said, ‘I don’t like what the robot is saying, but that’s the way it was programmed so I can’t blame it,’” says Roth, who conducted the study while he was a master’s student in the Carnegie Mellon University Robotics Institute.
But the researchers found that, overall, performance ebbed in the face of the robot’s discouragement, regardless of a participant’s technical sophistication.
The study, presented last month at the IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) in New Delhi, India, is a departure from typical human-robot interaction studies, which tend to focus on how humans and robots can best work together.
“This is one of the first studies of human-robot interaction in an environment where they are not cooperating,” says coauthor Fei Fang, an assistant professor in the Institute for Software Research.
The work has implications for a world where the number of robots and internet of things (IoT) devices with artificial intelligence capabilities is expected to grow exponentially. “We can expect home assistants to be cooperative,” she says, “but in situations such as online shopping, they may not have the same goals as we do.”
The study was an outgrowth of a student project in AI Methods for Social Good, a course that Fang teaches. The students wanted to explore the uses of game theory and bounded rationality in the context of robots, so they designed a study in which humans would compete against a robot in a game called “Guards and Treasures.”
The game is a so-called Stackelberg game, which researchers use to study rationality. It is a typical game for studying defender-attacker interaction in research on security games, an area in which Fang has done extensive work.
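For a sense of what rational play looks like in such a game, here is a minimal sketch in Python of the attacker’s side of a security game in the spirit of “Guards and Treasures.” The target names, payoffs, and coverage probabilities below are hypothetical, invented for illustration, and not taken from the study:

def expected_utility(reward, penalty, coverage):
    # If the target is unguarded, the attacker collects `reward`;
    # if a guard happens to cover it, the attacker pays `penalty`.
    return (1 - coverage) * reward - coverage * penalty

# Hypothetical targets: (reward, penalty, probability a guard covers the target).
targets = {
    "treasure_a": (5, 3, 0.6),
    "treasure_b": (8, 10, 0.7),
    "treasure_c": (3, 1, 0.2),
}

# A perfectly rational attacker picks the target with the highest
# expected utility; how far players stray from that choice is one way
# such studies measure bounded rationality.
best = max(targets, key=lambda name: expected_utility(*targets[name]))
print(best)  # -> treasure_c under these made-up numbers

In a Stackelberg game, the defender is the “leader” who commits to a strategy first; here, the guard coverage is fixed before the attacker moves, and a player’s score reflects how close their picks come to the rational best response.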
Each participant played the game 35 times with the robot, while either soaking in encouraging words from the robot or getting their ears singed with dismissive remarks. Although the human players’ rationality improved as the number of games played increased, those who were criticized by the robot didn’t score as well as those who received praise.
It’s well established that what other people say can affect an individual’s performance, but the study shows that humans also respond to what machines say, says Afsaneh Doryab, who was a systems scientist at Carnegie Mellon’s Human-Computer Interaction Institute (HCII) during the study and is now an assistant professor in engineering systems and environment at the University of Virginia.
A machine’s ability to prompt responses in humans could have implications for automated learning, mental health treatment, and even the use of robots as companions, she says.
Future work might focus on nonverbal expression between robots and humans, says Roth, now a PhD student at the University of Maryland. Fang suggests that more needs to be learned about how different types of machines—say, a humanoid robot as compared to a computer box—might evoke different responses in humans.
The National Science Foundation provided some support for this work.