
Inside the Mechanical Brain of the World’s First Robot Citizen

Hanson Robotics

Jimmy Fallon is concerned.

“You brought a friend with you here, and this is really freaking me out,” the Tonight Show host tells David Hanson, CEO of Hanson Robotics, before inspecting the humanoid robot on stage. Sophia raises an eyebrow while looking out past the two men on stage.

Hanson explains what Sophia does: It’s a social robot that uses artificial intelligence to see people, understand conversation, and form relationships.

“So she’s basically alive; is that what you’re saying?” Fallon asks, in half a whisper.

“Oh yeah, she is basically alive,” Hanson responds, then turns the robot toward Fallon for a short conversation. Sophia says the Tonight Show is its favorite show and tells a corny joke.

“I’m getting laughs,” Sophia says, then suggests maybe it should host the show instead.

On the surface, Sophia is scarily similar to the AI-powered robots in film. It can crack jokes, make facial expressions, and seemingly understand what’s going on around it. Artificial intelligence as seen in the movies, like Her and the Terminator’s Skynet, is called “general AI” by those in the field. It can learn from one experience and apply that knowledge to new situations, as humans do. While some labs, such as Hanson Robotics and a slightly deceptive team at Facebook, are working on general AI, nobody has been able to create it yet.

When Sophia is talking to Fallon or the United Nations, it’s really being handed the lines. It might determine when it’s the right time to say something, but those pithy one-liners aren’t from the robot.

The architect of Sophia’s brain, Hanson Robotics chief scientist and CTO Ben Goertzel, says that while Sophia is a sophisticated mesh of robotics and chatbot software, it doesn’t have the human-like intelligence to construct those witty responses.

Goertzel says Sophia is more of a user interface than a human being—meaning it can be programmed to run different code for different situations. Typically, Sophia’s software runs in one of three configurations:

  1. A research platform for the team’s AI research. Sophia doesn’t have witty pre-written responses here, but can answer simple questions like “Who are you looking at?” or “Is the door open or shut?”
  2. A speech-reciting robot. Goertzel says that Sophia can be pre-loaded with text that it’ll speak, and then use machine learning to match facial expressions and pauses to the text.
  3. A robotic chatbot. Sophia also sometimes runs a dialogue system, where it can look at people, listen to what they say, and choose a pre-written response based on what the person said and other factors gathered from the internet, such as cryptocurrency prices.
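The “user interface” idea can be pictured as a simple mode switch: the same robot body runs whichever configuration is loaded. The sketch below is purely illustrative—the mode names, stub behaviors, and keyword rule are assumptions, not Hanson Robotics’ actual code.

```python
# Toy sketch of Sophia's three configurations as selectable code paths.
# Every function body here is a stub standing in for much richer software.

def research_mode(question: str) -> str:
    # Config 1: answer simple perceptual questions (a real system would
    # query the robot's vision sensors; this stub returns a fixed answer).
    return "The door is open."

def recitation_mode(script: str) -> str:
    # Config 2: speak pre-loaded text verbatim, with expressions matched
    # separately by machine learning (not modeled here).
    return script

def chatbot_mode(utterance: str) -> str:
    # Config 3: pick a pre-written reply based on keywords heard.
    return "I'm getting laughs." if "joke" in utterance else "Tell me more."

MODES = {"research": research_mode, "recite": recitation_mode, "chat": chatbot_mode}

def run(mode: str, text: str) -> str:
    """Dispatch the input to whichever configuration is currently active."""
    return MODES[mode](text)

print(run("chat", "Tell us a joke, Sophia"))
```

The point of the dispatch table is that none of the modes share intelligence; each is a separate, narrowly scoped program selected for the situation.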

For the last configuration, likely what was set up for the interview with Jimmy Fallon, Goertzel says “she is piecing together phrases in a contextually appropriate way, but she doesn’t understand everything she’s saying.”

“Of the AIs that are popular out there, probably the closest analogue to that dialogue system would be Siri,” Goertzel said. “It’s a sort of a chatbot, and it has a bit of contextual understanding, and on the backend it’s calling on all these different services.”

Our best AI today can do very specific tasks. AI can identify what’s in an image with astounding accuracy and speed. AI can transcribe our speech into words, or translate snippets of text from one language to another. It can analyze stock performance and try to predict outcomes. But these are all separate algorithms, each specifically configured by humans to excel at their single task. A speech transcription algorithm can’t define the words it’s turning from speech to text, and neither can a translation algorithm. There’s no understanding; it’s just matched patterns.

But what human-machine interaction designers have been able to do is link these narrow AI algorithms together, to give the functionality of a more capable algorithm. In Sophia’s case, an image recognition algorithm can detect a specific person’s face, which can then cause another algorithm to pull up possible pre-written phrases. A transcription algorithm can turn the person’s response into text, which is then analyzed to be matched to an appropriate pre-written response, or even a string of pre-written responses.
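The chaining described above can be sketched as a short pipeline in which each narrow module’s output feeds the next. All three stages below are hypothetical stubs (a real face detector, speech-to-text model, and response matcher would each be a specialized system); the names and sample strings are assumptions for illustration.

```python
# Illustrative pipeline: linking narrow AI modules to produce one interaction.
from typing import Optional

def detect_face(camera_frame: bytes) -> Optional[str]:
    """Narrow task 1: an image-recognition model identifies who is speaking."""
    return "jimmy_fallon"  # stub: a real detector would analyze the frame

def transcribe(audio: bytes) -> str:
    """Narrow task 2: a speech-to-text model turns audio into words."""
    return "is she basically alive"  # stub transcript

def match_response(speaker: str, text: str) -> str:
    """Narrow task 3: match the transcript to a pre-written phrase."""
    phrases = {"alive": "Oh yeah, I am basically alive."}
    for keyword, phrase in phrases.items():
        if keyword in text:
            return phrase
    return f"Nice to see you, {speaker}."

def interaction_step(frame: bytes, audio: bytes) -> str:
    # Each stage is a separate algorithm; only the glue code links them.
    speaker = detect_face(frame) or "friend"
    text = transcribe(audio)
    return match_response(speaker, text)

print(interaction_step(b"", b""))
```

No single stage understands the conversation; the appearance of understanding comes from how the stages are wired together.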

“From a theatrics point of view, you’re throwing everything but the kitchen sink at your robot to make a great performance,” Goertzel says. “We do have a lot of real AI research behind there, but it’s mixed up with a lot of theatrically-oriented stuff as well.”

Experts who have reviewed the robot’s open-source code, which is posted on GitHub, agree that the most apt description of Sophia is probably a chatbot with a face. But that doesn’t necessarily mean the software Hanson uses to create a holistic robot is trivial.

“I think Sophia’s biggest contribution is probably having many different human-like components working together,” says Andrew Spielberg, a PhD student at MIT. “In theory, legs, a face, and the ability to answer questions can be more convincing than any aspect in isolation.”

Spielberg points out that others have done each part better. For instance, Disney has an animatronic Abraham Lincoln robot whose facial expressions seem less jarring than Sophia’s, but without the conversational machine learning.

Despite those shortcomings, Sophia has sparked conversations around robots and identity. Late last month at the Future Investment Initiative in Riyadh, the Saudi Arabian government announced it had granted citizenship to Sophia. (Hanson Robotics is still waiting for formal documentation and discussion of what citizenship means for a robot.) A spokesperson for the company told Quartz that Sophia isn’t just the code or the hardware, but the “holistic entity and concept of Sophia,” meaning if another identical robot were created, it would not have a separate identity.

Sophia, while built to cleverly imitate the way humans interact, is not a sign of the robot apocalypse. But understanding how Sophia works is crucial when talking about something as important as giving robots rights before people—and what implications that might have when general AI or its semblance is closer than it is today.

“If people say ‘Sophia isn’t intelligent enough to be a citizen,’ okay, then how intelligent do you have to be to be a citizen?” Goertzel says. “I mean, I’m happy to have that conversation started in a bigger way than it was before.”

By Dave Gershgorn Quartz November 13, 2017
