How Do We Stop Artificial Intelligence from Overpowering Humans?


Legislation could help protect humankind before AI dominates the world, experts say.

OK, maybe it is time to start worrying about robots taking over the world. 

Artificial intelligence could pose a serious threat to the human race, but only if we allow that to actually happen, according to Stuart Russell, electrical engineering and computer sciences professor at UC Berkeley. 

“At the moment, there is not nearly enough work on making sure it isn’t a threat to the human race,” Russell said, speaking Tuesday at an Information Technology and Innovation Foundation event in Washington, D.C.

Over the past year, a handful of prominent technology leaders, such as Elon Musk and Bill Gates, have expressed concern over the possibility of AI overpowering the human race. But if such a scenario were to play out, the blame would fall on humans themselves, for creating the technology without putting in place the appropriate policy to keep it from taking over the world, according to Russell.

“The main thrust of the field is toward building systems that have raw, general-purpose, human-level or superhuman intelligence,” he said.

Russell compared AI to climate change and nuclear weapons, both of which humans created and now suffer from. But if someone had legislated in the late 19th century to curb use of the internal combustion engine, or in the mid-20th century to restrict the materials needed for an atomic bomb, the situation today could be drastically different, he said.

But even when a few individuals recognize the harrowing effects of a new technology, there will always be skeptics standing in their way, Manuela Veloso, a computer science professor at Carnegie Mellon University, argued during the event.

“Even if we would have made policy for a little group of people that would abide by that policy and would be enforced on that policy, how do we stop the whole world?” Veloso asked.

Progress toward building superhuman artificial intelligence will and should continue, she said, and the best policymakers can do to protect humans against AI is to create legislation to combat problems as they occur.

ITIF Vice President Daniel Castro, who moderated the event, showed the video game “Breakout” on screen, in which a player uses a paddle to bounce a ball into a wall of bricks at the top of the screen while keeping it from falling off the bottom. But instead of a human at the controls, a computer had been instructed simply to maximize the score.

After two hours, it had successfully mastered the game. Four hours later, it had advanced beyond its instructions and discovered a better way to win.
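The article does not name the system in the demo, but the setup it describes, an agent told only to maximize its score, is reinforcement learning. As a rough illustration only (not the system shown at the event, which used a large neural network), here is a minimal sketch of that learning loop: tabular Q-learning on a hypothetical five-state corridor, where the agent is rewarded only for reaching the rightmost state and must discover the winning behavior on its own.

```python
import random

# Toy sketch of the "maximize the score" idea: tabular Q-learning on a
# hypothetical five-state corridor. The agent is never told to move
# right; it only sees a reward for reaching state 4 and learns the rest.

N_STATES = 5            # states 0..4; state 4 is the goal
ACTIONS = (-1, +1)      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move within the corridor; reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    """Highest-value action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

random.seed(0)
for _ in range(500):                            # training episodes
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit what is known; occasionally explore at random.
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, r = step(s, a)
        # Nudge the value estimate toward reward + discounted future value.
        target = r + GAMMA * max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])
        s = nxt

# After training, the learned policy heads right (+1) from every state.
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)
```

The point of the sketch is the one the panel debated: the objective handed to the agent is just "maximize reward," and the strategy it ends up with, like the Breakout tactic in the demo, is discovered rather than programmed.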

“Where are we right now?” Castro asked. “Are we still with computers figuring out how to use the paddles, and what happens when they not only get to the expert level, but they move beyond us?”

(Image via Tatiana Shepeleva/Shutterstock.com)