Instead of Asking, 'Are Robots Becoming More Human?' We Need to Ask, 'Are Humans Becoming More Robotic?'

Engineering consultant Valerie Hawley, right, tries to shake hands with Pepper, a SoftBank Robotics Europe robot. Francois Mori/AP

A new kind of Turing Test

For more than 65 years, computer scientists have studied whether machines’ behavior could become indistinguishable from human intelligence. But while we’ve focused on the machines, have we ignored changes in our own capabilities?

In a book due to be published next year, "Being Human in the 21st Century," a law professor and a philosopher argue we’ve overlooked the equally important, inverse question: Are humans becoming more like robots?

In 1950, the computer scientist Alan Turing put forward what’s now known as the “Turing Test.” Essentially, Turing proposed that a key test of machine thinking is whether someone posing the same questions to both a human and a machine could tell which is which. The test has since become an important method of evaluating artificial intelligence, with regular Turing Test competitions measuring machines’ growing ability to mimic human behavior.

But Brett Frischmann, a professor at Cardozo School of Law, and Evan Selinger, a philosophy professor at Rochester Institute of Technology, argue we need an inverse Turing Test to determine to what extent humans are becoming indistinguishable from machines.

Frischmann, who has published a paper on the subject, says changes in technology and our environment are slowly but surely making humans more machine-like.

These changes may seem small, says Frischmann, but taken together, they’re “meaningful.”

What does it mean to be human?

To test whether humans are becoming more machine-like, it’s important to define what makes us distinctively human. Philosophers have long considered this question, and often define human traits by comparing us to another category—typically, animals.

Frischmann and Selinger instead consider what distinguishes humans from machines. Several of these traits involve intelligence: common sense, rational thinking and irrational thinking are all intrinsically human. Frischmann points out that, as humans, our emotions sometimes make us behave irrationally.

“If we engineered an environment within which humans were always perfectly rational, then they’d behave like machines in a way we might be worried about,” he adds.

Another key category is autonomy and free will. The environment may influence our behavior, but it shouldn’t control it.

“I have some range of choice about how I can be an author of my own life,” Frischmann says.

Frischmann and Selinger blame humans’ increasingly machine-like behavior on “techno-social engineering,” which is another way of saying technology is reshaping our environment to make us behave in a more robotic way. Growing surveillance and “nudges” are slowly transforming the way we behave.

One seemingly innocuous example is electronic contracts: those pages that ask you to click and agree to terms and conditions before proceeding with a download or update.

“You see a little button that says ‘click to agree’ and what do you do? You click. Because it’s a stimulus response,” says Frischmann. “It’s easy to dismiss those things. But the fact that every day, you and I and millions of other people routinely respond to a stimulus and click and go without understanding what we’re getting ourselves into, we are behaving like machines. We’re being, in a sense, conditioned or programmed to behave that way.”

Frischmann also highlights Oral Roberts University in Oklahoma, which switched from asking students to keep a journal of physical activity to tracking their exercise with Fitbit devices. This removed students’ ability to reflect on their own behavior, and their freedom to exaggerate or lie if they so chose. The ability to reflect on your experiences is a key aspect of being human, Frischmann says, as is the ability to think about how you relate that behavior to others.

“For us, the Fitbit example is more about the culture of surveillance and the culture of a series of technologies that are tracking not just your activity in one context, but in a variety of contexts,” he says. “Before long, you’re not really thinking about your own activity.”

Why is this happening?

Dehumanization can’t simply be blamed on the growing use of technology. Instead, Frischmann says, our fetishization of technology is behind the trend. We’re overly trusting of and reliant on technological developments, mindlessly assuming every new piece of tech must be beneficial.

The other key factor, he says, is our obsession with efficiency, which fuels the infatuation with new technologies.

“If we can be made happy, cheaply, then what could be better?” he notes. “You don’t ask questions, you don’t resist. You want to minimize transaction costs. But sometimes, being human is costly.”

Maintaining personal relationships, in particular, is a costly but ultimately valuable aspect of being human.

“If we lose our ability to relate to each other along the way, because it’s efficient and cheap, we lose something of who we are,” he says.

It’s entirely possible, Frischmann says, that it will become increasingly difficult to distinguish between humans and robots, as much because of our machine-like behavior as because of robots’ human-like features. And could this eventually become the norm, with humans spending their entire lives acting like machines?

“I desperately hope we don’t get there,” he says. “I don’t think we’ll get there. But that’s kind of impossible to predict.”