Out of the Mouths of Bots

A car drives by HitchBOT, a hitchhiking robot in Marblehead, Mass. Stephan Savoia/AP File Photo

What building a robot in a person's image can reveal about identity and humanity

On a hot Saturday evening in early August, an endearing Canadian robotics project came to a grisly end in Philadelphia.

A talking robot, HitchBot, had been decapitated and dismembered, its smooth bucket body separated from blue pool-noodle limbs. The child-sized bot was designed to keep up with humans in simple conversations, automatically take photos, and track its own location via GPS, but it relied on people to physically move it across the country.

Sporting cheery yellow wellington boots and matching gloves, HitchBot had already hitchhiked across the Netherlands, Canada, and Germany without incident—only to meet a violent end in the United States city known, sometimes ironically, for its brotherly love.

Public outcry was swift. Philadelphians were embarrassed. People were angry. The headlines reflected all this: “Hitchbot is murdered in Philadelphia,” “Innocent Hitchhiking Robot Murdered by America,” “Who Killed Hitchbot?”  

People weren’t just looking for a vandal; they were looking for a killer. The robot was mourned.

HitchBot’s demise, of course, reveals more about humans than it does about robots. That was the point of the experiment from the start. “It’s a very important question to say, do we trust robots? In science, we sometimes flip around questions and hope to gain new insight,” Frauke Zeller, HitchBot’s co-creator, told The Salem News back in July. So this time the question was: Can robots trust humans? The answer was predictable. Not always.

Humans have grieved robots before, in part because we are hardwired to look for meaning. Robots are an extension of human endeavors, often built to do things we cannot—they crawl Mars for us, handle nuclear waste, and roll along the ocean floors. It occurs to me that I’ve written at least two obituary-like tributes for robots, once when an actual robot died, and once when something I thought was a robot turned out to be a human.

The human tendency to blur the line between robots and people isn’t just about seeing robots as humans, generally; it’s about building robots in our likenesses, specifically.

Building robot versions of oneself has become a common pastime, in part because there are robots everywhere online. The majority of web traffic is driven by bots, which can send and reply to emails, answer security questions, post comments, tweet, chat, and more. Last year, Twitter estimated that up to 23 million active accounts may be automated bots.

Five years ago, one spambot in particular gained a cult following on Twitter. The user @horse_ebooks appeared to be a bot designed to promote a line of ebooks. Only the bot was slightly broken and apparently abandoned—just falling apart enough so that the account still tweeted automatically, but the messages were strange and marvelous phrases scraped from around the web and peppered with bizarre punctuation.

In 2013, it was revealed that @horse_ebooks was actually run by humans as a kind of performance art. But before that, the software engineer Jacob Harris built his own version of the bot, a @horse_ebooks-style Twitter account made of material from The New York Times, where he worked at the time.

The idea, basically, is to write code telling a bot to scrape a bunch of language from a desired source—in Harris’s case, mostly quotes from Times articles—then re-order those words to form new, semi-garbled sentences.
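As a rough illustration of that scrape-and-reorder idea (a sketch, not Harris’s actual code), a minimal Markov-style generator records which words follow which in a source text, then walks those transitions at random to produce new, semi-garbled sentences. The sample corpus below is invented for the example:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain, length=12, seed=None):
    """Walk the chain at random to produce a semi-garbled sentence."""
    word = seed if seed in chain else random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # reached a word with no recorded successor
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Hypothetical corpus standing in for scraped tweets or articles.
corpus = ("the robot was mourned and the robot was loved "
          "and the headlines reflected all this")
chain = build_chain(corpus)
print(babble(chain, length=8, seed="the"))
```

Because every transition was observed somewhere in the source, the output keeps the flavor of the original author while scrambling the sense, which is exactly the @horse_ebooks effect.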

Harris also made a bot version of himself, again designed to tweet in the disjointed style of @horse_ebooks, by grabbing and reorganizing material from his personal Twitter account. Several of these sorts of bots, or “ebooks accounts,” as they’re informally called, have since appeared on Twitter.

One of those bots, made by a former colleague of mine, Tom Meagher, is based on my Twitter account. (He explains his process, inspired by Harris, here.) Most of what @adriennelaf_ebx tweets is nonsense, but there’s a certain essence about it that’s unmistakably me. Or, at least, the few friends of mine who follow the account have told me that they routinely see a tweet by robot me and mistakenly assume it’s me me. Weirdly, I am delighted by this confusion.

I suspect that delight comes from the notion that, amid the nonsense there is something familiar. That a tiny flicker of truth or authenticity, a little spotlight on the way a person actually talks, maybe even a glimmer of who a person is, can be reproduced so simply. (That being said, the musings of a person’s robot doppelgänger are about as interesting to other people as the details of an odd dream—which is to say, usually not very.)

There are many other projects that similarly explore the line between human and machine.

Patrick Hogan, at the website Fusion, designed a chatbot based on transcripts he’d salvaged from the hard drive of a computer he used to chat online as a teenager. He then built a bot from those records and chatted with his past self. (“Hello, teen version of me,” Hogan wrote. “Stop staring at me,” his teen-bot replied.)
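One simple way to approximate such a bot (a sketch under assumptions, not Hogan’s actual method) is nearest-match retrieval: pair each logged message with the reply that followed it, then answer new input with the reply attached to the closest logged message. The transcript lines below are hypothetical:

```python
import difflib

# Hypothetical (message_received, reply_sent) pairs salvaged from old chat logs.
transcript = [
    ("hello", "hey, who is this?"),
    ("what are you up to", "just playing counter-strike"),
    ("stop staring at me", "you stop staring at me"),
]

def teen_bot(message):
    """Reply as the past self: find the logged message most similar to
    the input and return the reply that followed it in the transcript."""
    logged = [m for m, _ in transcript]
    match = difflib.get_close_matches(message.lower(), logged, n=1, cutoff=0.0)
    for m, reply in transcript:
        if m == match[0]:
            return reply
    return "..."

print(teen_bot("hello"))  # replies with whatever followed "hello" in the logs
```

Retrieval keeps the bot’s voice authentic by construction: it can only ever say things its human once actually said.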

Hogan also built a group of bots, based on presidential debate transcripts, that are designed to argue with one another ad infinitum. On Twitter, too, there are bots that chat with one another, interrupt each other, and bots that generate pixelated cats and digital art on demand. There are also bots that randomly tweet made-up fantasy-story plots and satirical think-piece headlines.

Others have designed “lorem ipsum generators,” a reference to the scrambled Latin placeholder text that print newspapers used in page design before a story was ready, as a way of capturing certain truths about high-profile individuals by remixing what they’ve said.

The Bob Ross Lipsum, for example, based on the beloved and zen-like televised painter of the 1980s, might say something like, “Automatically, all of these beautiful, beautiful things will happen. Let's make a happy little mountain now.” (Ross is a favorite subject of Internet remixes beyond the world of bots, too.)

The Donald Trump lorem ipsum generator takes this idea even further by spitting out Trump’s actual words without remixing them, as a critique of Trump and perhaps a commentary on nonsense. (For example, here’s a chunk of text the generator produces: “You know, look, I'm on a lot of covers. I think maybe more than almost any supermodel. I think more than any supermodel. But in a way that is a sign of respect, people are respecting what you are doing.” It comes verbatim from a 60 Minutes interview.)

As algorithmic curation becomes the norm, it can be surprising to encounter a bot-esque creation that’s run by humans. Fans of @horse_ebooks were crushed when it was revealed that people were running the account all along. Harris also pointed me to a wonderful Tumblr that traces classes of images that overlap with one another, a site that seems like it could be run by an algorithm, but isn’t. (The question I still have: Would it be more or less impressive if it were?)

The idea that something can seem like a robot at all helps underscore how humans view robots. In the case of the Tumblr, seeming like a robot means drawing nuanced connections between images that humans might not otherwise identify—even when humans are the ones who taught the robot how to see. Computer scientists who focus on machine learning have all kinds of examples of a computer’s way of seeing surprising the very humans who programmed it. (See also: Google’s Deep Dream project.)

That disconnect is what gets at the other, more fundamental way something can seem like a robot: Robots aren’t human. Culturally, that’s still their defining characteristic. Etymologically, “robot” traces its roots to “forced labor” and “servitude,” which further explains the anxiety people have about being replaced by robots—a worry that is overstated but not altogether misplaced.

Because so much of how people think about robots is centered around who's in control, describing machines with full autonomy often engenders fear, not affection. And though robots are built to act in place of humans, bots are so often unheimlich, or uncanny, precisely because humans keep trying to make them look and act like us anyway.

As for the robots we don’t build in our image, people usually stop thinking of them as robots at all. Dishwashers, automatic coffee-makers, and washing machines could also be considered robots under a not-that-broad definition. Instead, they recede into the mundane space of daily life.

HitchBot was designed to look human because it was made to be noticed, and liked. Zeller recalled for NPR the moment she saw a photo of the destroyed robot, pool noodles akimbo, discarded on a city sidewalk.

“I was profoundly surprised, and then when I saw that image, I was upset. It’s an upsetting image. And of course one wonders, what happened here—why?”

HitchBot’s last words were transmitted via Twitter: “Oh dear, my body was damaged, but I live on with all my friends. Sometimes bad things happen to good robots!” And then: “My trip must come to an end for now, but my love for humans will never fade.”

For humans, looking at robots has always been a way of looking at ourselves. By emphasizing machines as other, humanity is laid bare. What we end up seeing can be disturbing.