Why Robots Won't Take Over the World Anytime Soon

Just this week, the world’s most famous living physicist, Stephen Hawking, laid out his worries about artificial intelligence: “The development of full artificial intelligence could spell the end of the human race,” he told the BBC. In October, Elon Musk delivered much the same message, warning that “we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”

Yet efforts to develop artificial intelligence (AI) continue apace, with all the major (and even many minor) computer science research and development facilities devoting time, energy, and money to making computers behave like humans. Some of them are succeeding: Machines can now understand humans, speak with them, learn from them, and write like them. They will make some jobs obsolete and others easier. But they aren’t—yet—out to get us.

The Siri generation

The first mainstream consumer application of a machine that can comfortably interact with humans in something close to natural language came from Apple in the form of the Siri personal assistant. Since then, Microsoft has launched Cortana, and Google has given users the option to talk to their phones, albeit without an anthropomorphic personality.

But Siri and its peers aren’t really AI. Though natural language processing—the ability for a machine to engage in conversation—has made great leaps in the past half decade, it remains limited, says Charles Ortiz, who works on AI at Nuance, the company responsible for Siri’s voice recognition technology.

For example, you can ask Siri to find restaurants in a particular neighborhood. But Siri cannot engage in more complex interactions that involve multiple data points, such as, “I’d like a reservation for a restaurant in Chelsea, preferably Italian but Chinese will do, for three people at around 7pm—oh, and I’d like one with valet parking and a children’s menu.” That sort of thing requires multiple interactions or several Google searches. “The real goal is to try to humanize this very expansive technological landscape that we’re all faced with every day,” says Ortiz.
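
To get a feel for what such a request demands of a machine, it helps to write it out as structured data. The sketch below is purely illustrative, with invented slot names and sample restaurants, and says nothing about how Siri or Nuance’s software actually works; it simply shows the kind of multi-constraint query an assistant would need to extract from one spoken sentence and resolve in a single pass.

# Illustrative only: a hand-built version of the multi-constraint restaurant
# request from the text. Slot names and sample data are invented; this does
# not reflect Siri's or Nuance's internals.

request = {
    "neighborhood": "Chelsea",
    "cuisine_preference": ["Italian", "Chinese"],  # ordered by preference
    "party_size": 3,       # would feed the booking step, not the search
    "time": "19:00",
    "required_amenities": {"valet parking", "children's menu"},
}

restaurants = [  # toy candidate list
    {"name": "Trattoria Uno", "neighborhood": "Chelsea", "cuisine": "Italian",
     "amenities": {"valet parking", "children's menu"}},
    {"name": "Golden Lotus", "neighborhood": "Chelsea", "cuisine": "Chinese",
     "amenities": {"children's menu"}},
]

def matches(restaurant, req):
    # A candidate survives only if every hard constraint is satisfied.
    return (restaurant["neighborhood"] == req["neighborhood"]
            and restaurant["cuisine"] in req["cuisine_preference"]
            and req["required_amenities"] <= restaurant["amenities"])

# Rank the survivors by how far down the cuisine preference list they sit.
hits = sorted((r for r in restaurants if matches(r, request)),
              key=lambda r: request["cuisine_preference"].index(r["cuisine"]))
print([r["name"] for r in hits])  # ['Trattoria Uno']

The filtering at the end is trivial; the hard part is getting from a spoken sentence to that structured request reliably, which is where natural language processing still falls short.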

Amelia goes to work

While a machine that humans can casually chat with may be a while away, a more basic, text-based version for business is already live. IPsoft, a New York-based technology firm, has developed what it calls a “cognitive knowledge worker” named Amelia, and will work with Accenture to pilot the software with two big oil-industry companies, Baker Hughes and Shell.

Amelia’s anthropomorphic avatar is a pleasant, blonde woman with neat hair and a somewhat intense gaze. She (well, it) has the comprehension of a six-year-old and an emotional spectrum with which to react to the tone of her human interlocutor, says IPsoft’s Martijn Gribnau. If a human is becoming irritable, Amelia adjusts her tone accordingly. And if she can’t solve a problem, she calls in a human operator—and then watches and learns from that interaction.

When a human types a message to Amelia, the software breaks the message down into its component parts. As the conversation progresses, Amelia is able to relate prior questions to the current ones, informing her answers. While this may not sound like a particularly big deal for a six-year-old, it is an impressive feat for a machine.
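
A crude way to picture that behavior is a program that keeps a running memory of the conversation and folds earlier turns into its later answers. The toy sketch below is not IPsoft’s technology, and its “parsing” is just keyword matching on an invented invoice scenario, but it illustrates the basic move of breaking a message into parts and relating a follow-up question to what came before.

# Toy sketch of context carry-over in a text conversation. This is not
# IPsoft's Amelia; the scenario and replies are invented for illustration.

class ToyAgent:
    def __init__(self):
        self.memory = {}  # facts gathered so far in the conversation

    def handle(self, message):
        text = message.lower()
        # "Break the message down": pull out any invoice number mentioned.
        for word in text.replace("#", " ").split():
            if word.isdigit():
                self.memory["invoice"] = word
        if "status" in text:
            if "invoice" in self.memory:
                # Relate the current question to an earlier turn.
                return f"Invoice {self.memory['invoice']} is awaiting approval."
            return "Which invoice do you mean?"
        if "invoice" in text:
            return f"Noted invoice {self.memory.get('invoice', '(unknown)')}."
        # Can't solve it: hand off to a human, as Amelia does.
        return "Let me bring in a human colleague for that."

agent = ToyAgent()
print(agent.handle("I have a question about invoice #4471"))
print(agent.handle("What's its status?"))  # answered using remembered context

The second reply only makes sense because the program remembered the invoice number from the first turn.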

As part of its pilot, Baker Hughes initially will use Amelia to deal with outside vendors—queries about invoices and that sort of thing. At Shell, Amelia will help staff develop new courses for internal training, says Cyrille Bataller of Accenture.

Personalized prose

Where Nuance and IPsoft want to understand natural language, Yseop (pronounced easy-op), a French company with offices in the US, makes software that approaches natural language from the other side. Instead of trying to understand it, Yseop’s business is to write it. Other firms such as Narrative Science, whose robots write for Forbes, and Automated Insights, which works with the Associated Press, make software that can generate reports from data. Yseop, however, is able to produce personalized reports and recommendations based on dynamic input from everyday users, says John Rauscher, the company’s CEO.

Websites such as vetonline.com, and brands such as L’Oreal, use the software to ask customers questions and generate plain-English suggestions in much the same way that a doctor or a hairstylist might. The software relies on an AI mechanism called an inference engine.
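
An inference engine, in its simplest form, is a loop that applies if-then rules to a set of known facts until nothing new can be concluded. The sketch below is a minimal forward-chaining engine with made-up haircare rules, not Yseop’s product; it only shows the general mechanism the term refers to.

# Minimal forward-chaining inference engine. The rules and facts are invented
# for illustration; this is the general mechanism, not Yseop's software.

rules = [
    # (set of conditions, conclusion)
    ({"hair_dry", "washes_daily"}, "recommend_moisturizing_shampoo"),
    ({"recommend_moisturizing_shampoo", "colored_hair"}, "recommend_color_safe_line"),
]

def infer(facts):
    # Apply rules repeatedly until no new facts can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A customer's answers on the website, expressed as starting facts.
print(infer({"hair_dry", "washes_daily", "colored_hair"}))

Given a customer’s answers as starting facts, the engine derives whatever recommendations those facts entail, which a generation layer can then turn into plain English.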

Rauscher showed me demonstrations of other uses of the software, such as drawing up quick, easy-to-read summaries from a customer relationship management database. The underlying notion is that data is no longer hard to find or retain, but combing through mountains of information to interpret it can often be a challenging task for humans. If a machine can summarize lots of information in plain English, that makes it easier for humans to act upon it.
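
That last step, from structured data to readable prose, can be pictured as rules plus templates. The sketch below uses an invented CRM-style record and invented thresholds; it is a caricature of natural-language generation rather than Yseop’s engine, but it shows how a row of numbers becomes a sentence a salesperson can act on.

# Caricature of data-to-text generation: turn one CRM-style record into a
# short plain-English summary. Field names and thresholds are invented.

account = {
    "name": "Acme Industrial",
    "revenue_this_quarter": 182_000,
    "revenue_last_quarter": 140_000,
    "open_support_tickets": 4,
}

def summarize(acc):
    change = acc["revenue_this_quarter"] - acc["revenue_last_quarter"]
    pct = 100 * change / acc["revenue_last_quarter"]
    trend = "grew" if change > 0 else "shrank"
    sentences = [
        f"Revenue from {acc['name']} {trend} {abs(pct):.0f}% quarter on quarter, "
        f"to ${acc['revenue_this_quarter']:,}."
    ]
    if acc["open_support_tickets"] > 3:
        sentences.append("The account has an unusually high number of open "
                         "support tickets and may need attention.")
    return " ".join(sentences)

print(summarize(account))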

Only the beginning

Amelia and Yseop’s software are rudimentary forms of artificial intelligence; neither is likely to be plotting world domination. A bigger worry is the more quotidian one of jobs.

Call-center workers, whether dealing with invoice queries or offering advice on pet health, adhere to tight scripts and processes—individual staff are given limited autonomy, if any. Machines are well-suited to such roles. (Indeed, repetitive jobs have always been at risk from automation: In the 15th century, Venice installed a clocktower topped with two bronze figures that would strike a bell every hour, taking over a job that had previously belonged to a human bell-ringer.)

But in other knowledge-intensive professions, AI will augment rather than replace humans, IPsoft’s Gribnau argues. Sales executives can get quick summaries about their clients rather than having to wade through spreadsheets. Doctors, lawyers, financial services workers, and others who need to be familiar with vast amounts of information can query assistants such as Amelia to easily pull up specific information about regulations, ailments, or past interactions. Emergency wards can automate triage for low-risk patients by using Yseop’s software to ask basic questions and spit out reports.

These are the benign, even desirable types of AI. Yseop’s software cannot think for itself; Amelia is not autonomous. Both need human input to be of any use. The terrifying “full” intelligence that Musk and Hawking worry about remains out of reach—for now. But in the meantime, perhaps AI can help humans avoid the equally scary prospect of chasing invoices and waiting on hold with call centers in perpetuity.

(Image via Mopic/Shutterstock.com)