Facebook Isn’t Developing a General AI, But Wrote a Paper About How It Would if It Were, Which It’s Not

Facebook CEO Mark Zuckerberg. (Eric Risberg/AP File Photo)

For many working in artificial intelligence, the ultimate goal is a general AI: software that could reason its way through any problem like a human, but on a superhuman scale. Humans are only now figuring out how to make AI proficient at narrow tasks, like identifying objects in photos, which means an actual general AI is still a pipe dream.

That hasn’t stopped a handful of researchers at Facebook from toying with how a general AI would work, and how we might measure progress in building one, if such a thing could be built. Their proposed system, dubbed CommAI, was first referenced in a 2015 paper titled “A Roadmap towards Machine Intelligence.” CommAI even comes with its own GitHub page.

The new paper, submitted for review at the International Conference on Learning Representations, begins with a rough framework for a general AI. First, the AI needs to communicate natively in a human language: you say something, the machine understands it and can provide an intelligible answer.

“An AI will be useful to us only if we are able to communicate with it: assigning it tasks, understanding the information it returns, and teaching it new skills,” the Facebookers write. “Since natural language is by far the easiest way for us to communicate, we require our useful AI to be endowed with basic linguistic abilities.”
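To make that requirement concrete, here is a minimal sketch, in Python, of the kind of communication loop the researchers describe: an environment poses a task as text, a learner replies in text, and a reward signal says whether the reply was acceptable. The class names and the toy “repeat the word” task are illustrative assumptions, not code from Facebook’s CommAI-env repository.

```python
# Hypothetical sketch of a CommAI-style communication loop: tasks and
# feedback flow through a single text-plus-reward channel. Nothing here is
# taken from the actual CommAI-env code.

import random


class EchoTask:
    """A toy task: the environment asks the learner to repeat a word."""

    def __init__(self, vocabulary):
        self.vocabulary = vocabulary
        self.target = None

    def prompt(self):
        # Pick a word and ask for it back, phrased as a natural-language
        # instruction.
        self.target = random.choice(self.vocabulary)
        return f"say {self.target}"

    def reward(self, reply):
        # Sparse feedback: 1 for a correct reply, 0 otherwise.
        return 1 if reply.strip() == self.target else 0


class RandomLearner:
    """A placeholder agent that guesses; a real learner would improve."""

    def __init__(self, vocabulary):
        self.vocabulary = vocabulary

    def respond(self, message):
        return random.choice(self.vocabulary)


if __name__ == "__main__":
    vocab = ["apple", "tax", "nebula"]
    task, learner = EchoTask(vocab), RandomLearner(vocab)
    total = 0
    for _ in range(100):
        message = task.prompt()
        reply = learner.respond(message)
        total += task.reward(reply)
    print(f"reward over 100 episodes: {total}")
```

The point of a setup like this is that every instruction and every piece of feedback flows through the same linguistic channel the researchers describe, so assigning the agent a task, reading its answer, and teaching it something new all amount to talking to it.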

A general AI also needs the ability to expand its vocabulary to learn more specific tasks, like answering questions about astronomy or filing taxes. It’s easy to imagine a starter AI that can hold a basic conversation about current events or how it was built, but then, like a human, gets specifically trained for certain industries.

The next requirement builds on that same idea: the AI needs to know how to learn continuously. The Facebook team sees this as the most important part of the whole system, and believes any general-AI system should be benchmarked on its ability to learn new concepts.

This makes sense: the entire point of general artificial intelligence is flexibility, taking ideas learned in one scenario and applying them in another. To do that, the AI would also need general inputs and outputs, meaning it would have to be able to see and understand many different kinds of data.
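How might that benchmarking work in practice? Below is a hedged sketch, again in Python, of one plausible way to operationalize “benchmarked on its ability to learn new concepts”: score an agent by how many episodes it needs to reach a success criterion on each new task in a sequence. The tasks, the frequency-counting learner, and the episodes_to_learn helper are invented for illustration; none of them come from the Facebook paper or the CommAI-env repository.

```python
# Hedged sketch of a learning-to-learn benchmark: the score for each new
# task is how quickly the agent reaches a success criterion, not its final
# accuracy on any one task. All names here are illustrative assumptions.

import random
from collections import Counter


def make_association_task(prompt, answer):
    """A task is just a (prompt, answer) pair the agent must learn."""
    return (prompt, answer)


class CountingLearner:
    """Remembers which reply earned reward for each prompt -- a stand-in
    for a real continually learning agent."""

    def __init__(self, answers):
        self.answers = answers
        self.memory = {}

    def respond(self, prompt):
        if prompt in self.memory:
            # Reuse whichever reply has been rewarded most often.
            return self.memory[prompt].most_common(1)[0][0]
        return random.choice(self.answers)

    def observe(self, prompt, reply, reward):
        if reward:
            self.memory.setdefault(prompt, Counter())[reply] += 1


def episodes_to_learn(learner, task, streak_needed=5, max_episodes=1000):
    """Count episodes until the agent answers correctly `streak_needed`
    times in a row; fewer episodes means faster learning of the concept."""
    prompt, answer = task
    streak = 0
    for episode in range(1, max_episodes + 1):
        reply = learner.respond(prompt)
        reward = 1 if reply == answer else 0
        learner.observe(prompt, reply, reward)
        streak = streak + 1 if reward else 0
        if streak >= streak_needed:
            return episode
    return max_episodes


if __name__ == "__main__":
    answers = ["red", "green", "blue"]
    tasks = [make_association_task(f"color of object {i}", random.choice(answers))
             for i in range(3)]
    learner = CountingLearner(answers)
    scores = [episodes_to_learn(learner, t) for t in tasks]
    print("episodes needed per new task:", scores)
```

Under a metric like this, an agent that carries over what it learned on earlier tasks needs fewer episodes on each new one, which is exactly the flexibility described above.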

“We do not claim that satisfying our [criteria] will lead to a full-fledged intelligent machine,” the researchers write, “but we see them as prerequisites to be able to efficiently teach it more advanced skills.”

A Facebook spokesman denied that the company was seriously working on a general AI, noting that each of these researchers also works on many different projects.

But the race is already on: Google DeepMind’s stated goal is to “solve intelligence,” and nonprofit research groups like OpenAI are already looking for ways to counteract a potentially malicious general AI.

A general artificial intelligence would undoubtedly change the world, and give unprecedented power to whoever created it. It could automate nearly any job; it’s difficult to think of a vocation that would be spared by a computer or robot that could reason like a human. And after that, what’s left for us to do, besides keep scrolling through Facebook?