Is it Unethical to Design Robots to Resemble Humans?


The more we humanize chatbots, virtual assistants and machines, the more we in turn display human emotions toward them.

Three men deliver an endless assault of kicks as the victim lies motionless on the grass. With venom in their eyes, one of the perpetrators lands a crushing blow with a wooden bat. Another drops to his knees and unleashes a flurry of fists.

The printer is dead. Plastic parts and microchips are strewn across the ground.

So goes the scene in Mike Judge’s cult classic film "Office Space," a cathartic release from the constant indignities endured by the modern office worker. The printer is a source of chagrin for its regular paper-jam notifications and its inability to communicate properly with its human users. Nothing triggers compassion for this inanimate object: It is only a machine, made of plastic and filled with microchips and wires. When the printer met its demise, the audience felt only joy.

But what if this brutal assault had been on a human-looking machine that cried out to its attackers for mercy? What if, instead of a benign-looking printer, the victim had a name and human characteristics? Would we still mindlessly attack it? Would we feel differently?

As technology progresses from inanimate objects governed by numbers to human-looking machines controlled through conversation, it raises questions about the compassion we owe to artificial intelligence, and to one another.

Learning to Love Bots

Human compassion is generally reserved for the living, and follows a sliding scale that rises with how much awareness we believe a being possesses. But in 2015, Japanese researchers found neurophysiological evidence that humans feel empathy for robots that appear to be in pain: Brain scans indicate we have an automatic, visceral empathic reaction to both humans and objects that look like humans.

And so the more we humanize chatbots, virtual assistants and machines, the more we in turn display human emotions toward them. This is the process of anthropomorphism, whereby we attribute human emotions, traits and intentions to inanimate objects.

When something appears alive, it is in our nature to view it through a human lens. Now that many AIs and conversational bots give the illusion of self-awareness, they trigger emotional responses in their users as if they were human. If the despised printer in "Office Space" had resembled a human (or a living animal, for that matter), our feelings toward both the object and the violent perpetrators would be altered. That’s why many people cringe when they see Boston Dynamics’ robotic dog getting kicked.

If we decide to treat human-looking objects merely as objects, would that eventually lead us to objectify humans, too? It is unclear how human compassion, or the lack thereof, toward AI will affect our IRL relationships. But there is a worrisome possibility that our freedom to treat human-seeming tech however we like (from rudeness to physical abuse to sex without consent) may transfer to normalizing those actions in our human relationships. In other words, society’s push toward humanizing AI could have the unintended consequence of dehumanizing actual humans.

The Gaming of Human Compassion

Human compassion can be gamed. It is the ultimate psychological hack: a glitch in human response that can be exploited to make a product sticky. That’s why designers give AIs human characteristics in the first place: They want us to like them.

Take “companion robots,” for example, which are being used to provide emotional support for the elderly. These robots are built specifically to read and respond to human emotions and to offer artificial empathy. Likewise, the virtual assistants in our phones and Amazon Echo speakers are deliberately personified so we will want to interact with them more.

The rapid push toward imbuing our electronics and online experiences with qualities that blur the line between human and nonhuman interactions raises two fundamental questions:

  1. How should society treat realistic AI? For example, is it OK to stab a realistic-looking robot? As an object without awareness, the stabbing would seemingly be a victimless display of abhorrent behavior. But while the action causes no direct harm to another person, its very existence may cross a moral threshold of wrongness. There’s a concern that exerting extreme dominance over realistic AI may lead to the abuse of humans and animals.
  2. How realistic should companies make AI objects, experiences and chatbots? Advancing AI and virtual reality will bring issues regarding the visceral impact of realistic experiences to the forefront. This will open the door to some very uncomfortable questions. Should murder be outlawed in virtual reality? Should companies be allowed to create realistic, underage-looking sex dolls? “The problem that we face is that we have no idea whether allowing the use of such robots would be an outlet for certain behaviors that ends up reducing actual incidences of child abuse, or whether it’s something that perpetuates and normalizes such behavior,” said MIT Media Lab robot ethicist Kate Darling at the We Robot 2017 conference at Yale Law School in March 2017. “We have no clue.”

The two questions regarding the treatment and the development of realistic AI and chatbots need to be considered in tandem. It behooves society not only to establish norms of behavior toward realistic robots and virtual assistants, but also to develop better guidelines and expectations for the businesses that build them.

The bottom line is that we should push for more responsible AI, where deep thought and analysis are given to emerging tech. Realistic AI, chatbots and humanoid companions will not only disrupt our conception of compassion, but also have the potential to adversely impact our human-to-human relationships. It would serve us well to consider carefully how we interact with the fantasy worlds these companies create. While we may not always have control over our emotional responses to tech, we can control what we develop.