Worried About Robots Taking Over? This Ethics Bot Might Put Your Mind at Ease.

A six-foot-tall, 300-pound Valkyrie robot is seen at University of Massachusetts-Lowell's robotics center in Lowell, Mass. Elise Amendola/AP

Just how worried should we be about killer robots? To go by the opinions of a highly regarded group of scholars, including Stephen Hawking, Max Tegmark, Frank Wilczek and Stuart Russell, we should be wary of the prospect of artificial intelligence rebelling against its makers.

“One can imagine (AI) outsmarting financial markets, outinventing human researchers, outmanipulating human leaders, and developing weapons we cannot even understand,” Hawking wrote in a 2014 article for The Independent. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

The fear that our irresponsible creations might bring about the end of humanity is a common one.

In Goethe’s poem “The Sorcerer’s Apprentice,” an apprentice enchants a broom to do his work but then cannot control it. Jewish narratives tell the story of the golem, a powerful creature made of clay that was forged to serve the community but wound up threatening it.

These dystopian visions of the future cause little harm if they are merely fodder for cocktail parties and speculative essays. However, ethicists such as Wendell Wallach have suggested that the AI community needs to curb the pace of research and development until risks are properly assessed. Others, including Tesla founder Elon Musk, have recommended that public-policy makers exert control over AI projects.

But slowing down innovation in AI poses far more of a threat to us than killer robots do. In fact, if we really care about the future of the human race, we need more AI projects, not fewer.

Coping with doomsday predictions

The problem with end-of-the-world predictions is that they are very difficult to disprove. Even if history offers no basis for such fears, there is always the chance that things will turn out differently next time.

But it is possible to distinguish between concerns raised about a specific application of AI and concerns about AI as a whole. On March 9, 2016, two high-ranking UN experts issued a new report that includes a call to ban fully autonomous weapons. Previous UN reports issued in 2013 and 2014 called for a moratorium on these weapons.

Such concerns about the development of a very specific use of AI deserve close attention. There is incontestable evidence that such weapons are already being developed, so we know this is a real threat. And such arms do raise serious ethical questions, even if there is no reason to believe that these weapons will attempt to overthrow the government.

In contrast, calls for the government to regulate the development of AI as a whole are far more eyebrow-raising. For instance, Yale University ethicist Wendell Wallach argues in his book "A Dangerous Master" that “The effective oversight of emerging technologies requires some combination of both hard regulations enforced by government agencies and expanded soft governance mechanisms.”

John Markoff of The New York Times reports in his "Machines of Loving Grace" that several AI researchers are already exploring ideas for oversight of work in their field, following the model of biologists who, in 1997, grew concerned about some developments in their own field and called for self-regulation. That move resulted in the National Institutes of Health restricting its approval and support of a whole line of biological research, human cloning, for more than 10 years.

But AI research and development is already carried out by so many different actors, both in academia and in the business sector, and in many countries around the world, that putting a lid on it seems highly impractical. Moreover, one must take into account the great benefits of AI.

For instance, by introducing measures that alert drivers when they are getting too close to other cars, AI is already saving tens of thousands of lives, and it will soon save many more. AI is assisting doctors through robotic surgery, and it helps pilots on many thousands of flights every day reach their destinations.

Indeed, we should ask ourselves why certain AI programs were not available when we badly needed them. When the reactors in Fukushima, Japan, started to melt down in the aftermath of the March 2011 earthquake and tsunami, the staff had to leave before they could shut down the reactors. Had an AI robot been in place at the time, it could have taken over and prevented the calamity that followed.

Let AI supervise itself

If we really want to keep AI from straying into nefarious territory, we need more of it to supervise the technology we already have. After all, AI may be autonomous, but it has no intentions or motivations of its own unless humans program those intentions in. So long as we ensure that programming for smart machines is subject to accountability and oversight, there is no reason to fear they will choose evil goals on their own.

We are now calling on the AI community to develop a whole new slew of AI oversight programs that can hold AI operational programs accountable. This effort is known as AI Guardians.

AI operational systems need a great degree of latitude so they can apply what they learn from additional data mining and experience, and so they can render at least semi-autonomous decisions. However, all operational systems need some boundaries, both so that they do not violate the law and so that they heed ethical guidelines. Oversight here can be relatively flat and flexible, but it cannot be avoided.
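
To make this oversight idea more concrete, here is one minimal sketch, in Python, of how a guardian layer might review an operational system's proposed actions against legal and ethical boundaries and keep an audit log for later accountability. Everything here (the Guardian class, the boundary rules, the context keys) is an illustrative assumption, not a description of any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Dict, List


@dataclass
class Decision:
    action: str    # what the operational system proposes to do
    context: Dict  # the situation it is responding to


@dataclass
class Guardian:
    """Hypothetical oversight layer: checks each proposed decision against
    legal/ethical boundary rules and keeps an audit log for accountability."""
    boundaries: List[Callable[[Decision], bool]]
    audit_log: List[str] = field(default_factory=list)

    def review(self, decision: Decision) -> bool:
        violations = [rule.__name__ for rule in self.boundaries if not rule(decision)]
        verdict = "approved" if not violations else "blocked: " + ", ".join(violations)
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} {decision.action}: {verdict}")
        return not violations


# Illustrative boundary rules for a driving system (assumed, not real law).
def within_speed_limit(d: Decision) -> bool:
    return d.context.get("speed", 0) <= d.context.get("speed_limit", 0)


def no_use_of_protected_attributes(d: Decision) -> bool:
    return not d.context.get("uses_protected_attributes", False)


guardian = Guardian(boundaries=[within_speed_limit, no_use_of_protected_attributes])
approved = guardian.review(Decision("change_lane", {"speed": 60, "speed_limit": 65}))
print(approved)                # True: the proposal stays within the boundaries
print(guardian.audit_log[-1])  # timestamped record of the ruling
```

The point of the sketch is that the guardian does not drive the car itself; it only approves or blocks proposals and records why, which is exactly the kind of record needed to assign responsibility after an incident.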

This oversight system can help determine who or what was at fault when AI is involved in a situation that causes harm to humans—say, when a driverless car crashes into another. Was the crash attributable to the programmer’s mistakes or ill intent, or to decisions made by the autonomous AI operational system of the car?

AI enforcement mechanisms are also needed to ensure that AI operational systems adhere to legal and ethical guidelines—for example, avoiding any discrimination against minorities when it comes to how search engines display jobs, credit and housing information.

One solution is ethics bots: programs that inform operational AI systems of the values their owners and operators want them to honor. These bots can instruct cars whether they should drive at whatever speed the law allows or in ways that conserve fuel, or whether they should stay in the slower lanes when children are in the car. They can also signal when it’s time to alert humans to a problem—such as waking up a sleeping passenger if the car passes a traffic accident.
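
As a rough illustration of what such an ethics bot could look like, the sketch below translates owner-stated preferences into concrete constraints and alert rules for a driving system. The preference names and the EthicsBot interface are hypothetical examples, not an existing product or API.

```python
# Illustrative sketch only: an "ethics bot" that turns owner-stated values
# into concrete constraints and alert rules for a driving system.
OWNER_VALUES = {
    "prefer_fuel_economy": True,         # drive efficiently rather than at the limit
    "children_on_board": True,           # keep to the slower lanes
    "wake_passenger_on_incident": True,  # alert a sleeping passenger near accidents
}


class EthicsBot:
    def __init__(self, values: dict):
        self.values = values

    def driving_constraints(self, speed_limit: float) -> dict:
        """Constraints the operational driving system is asked to honor."""
        target = speed_limit * 0.9 if self.values.get("prefer_fuel_economy") else speed_limit
        return {
            "max_speed": target,
            "keep_to_slow_lanes": bool(self.values.get("children_on_board")),
        }

    def should_alert_human(self, event: str) -> bool:
        """Decide when a situation should be handed back to the humans in the car."""
        return event == "traffic_accident_nearby" and bool(self.values.get("wake_passenger_on_incident"))


bot = EthicsBot(OWNER_VALUES)
print(bot.driving_constraints(speed_limit=65))            # e.g. {'max_speed': 58.5, 'keep_to_slow_lanes': True}
print(bot.should_alert_human("traffic_accident_nearby"))  # True
```

Used this way, the ethics bot sits between the owners’ values and the operational system: the car’s planner would query it for constraints rather than having those values hard-coded.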

In short, there is no reason to introduce unnecessary and restrictive oversight into the AI world. However, there is plenty of room for guidance. The time has come for the industry to receive guidance that will ensure AI operational systems adhere to our legal and moral values—and that robots don’t come after us while we sleep.