We Should Raise AIs Like Parents, Not Programmers—Or They’ll Turn Into Terrible Toddlers


At its core, creating a safe AI is not that different from raising a decent human.

A few years ago, when my son was barely three, he mistook me for a slow hard drive. As I was explaining a new concept to him, I stumbled. While I searched my brain for the right words, he looked up at me: “Mama, it’s loading.”

We are surely on a path to faster downloads. We just need to make sure we are loading the right stuff.

The world is entering a new phase of evolution, one where we’ll be powered by the technology we build. One where artificial intelligence will both augment and dwarf our primate brains. One where our own survival will depend on ensuring that AI helps us evolve into better beings, instead of birthing the doomsday that Stephen Hawking and Elon Musk warn us about.

It will not be enough to have an AI that makes effective decisions. As we move toward artificial super-intelligence—a machine intelligence that surpasses human intellectual capability—we need to teach AI to tell right from wrong.

Sound familiar? That’s because we must think about building AI the same way we think about raising our children.

At its core, creating a safe AI is not that different from raising a decent human. Our parenting helps shape luminaries and tyrants, geniuses and dictators. But while a single human parenting failure produces one problem child, the impact of failing to parent AI correctly will be far more profound. AI is already wired into systems that manage all aspects of life: from energy and telecommunications to autonomous machinery, medical care, and financial and legal controls. The sheer number of decisions AIs already make—from guiding our movements to approving mortgages and evaluating medical treatments—is astronomical. So when our AI grows up, it has the potential to cause devastating effects far beyond the impact of any one rogue human.

Right now, AI is still very much a toddler. More accurately, it is many toddlers, because there are many AIs. But unlike a toddler, it has much more data to learn from, an ever-expanding processor, and an ability to be in many places at once… and no parent at home. In fact, there isn’t an entity today ready to guide AI through its terrible twos.

If building AI requires even greater diligence than raising a child, what can we do now to safeguard AI’s development at its very foundation? We can apply some important lessons we teach young humans to how we govern AI:

  1. Keep an open mind
  2. Be fair
  3. Be kind

Keep an open mind: the peril of bias

When I tell my now-teenage son “show me your friends, and I will tell you who you are,” or explain that “you are the average of the five people you spend the most time with,” I’m trying to teach him how susceptible our brains are to influence. Exposure to ideas and experiences changes how we think, how we act, and ultimately who we are. In fact, neuroscientists have shown that these experiences rewire the neural networks of our brains, especially during childhood.

This is also true of AI.

Like human brains, machine-learning algorithms assess how to act based on past experiences: They create decision pathways based on the data they have seen. If the data they’re exposed to is limited, their understanding of the real-time information they process will be, too.

We have seen this in Google’s image-recognition technology, which labeled photos of black people as gorillas because dark-skinned faces were underrepresented in its training data. That lack of diversity is at the root of bias, and AI makes such invisible human biases far more apparent.

One way to minimize or remove bias in both AIs and children is to ensure that they are exposed to sufficient diversity in what they observe. Fed homogeneous data, an AI will settle into a narrow view of the world, just as kids raised on a single point of view grow into adults who can understand only that one. Tay, a chatbot created by Microsoft, is a good example: Within 24 hours of being introduced to the world, it began to repeat the racist remarks it heard on Twitter. Like a teenager influenced by her group of friends, the chatbot was shaped by the opinions it heard.
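To make the mechanism concrete, here is a minimal sketch in Python of a learner that can only echo the data it is fed. The tiny bigram model and the one-sided “corpus” are invented for illustration; they are nothing like Tay’s actual architecture, but they show why a narrow diet of examples produces a narrow machine.

```python
# A minimal sketch (not any production system) of how a learner can only
# reflect the data it is fed. The toy corpus and bigram model are
# hypothetical illustrations, not Tay's actual architecture.
from collections import defaultdict
import random

def train_bigram_model(corpus):
    """Record which word follows which across all training sentences."""
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current].append(nxt)
    return follows

def generate(follows, start, length=5):
    """Generate text by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:   # nothing was ever learned beyond this point
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

# A deliberately one-sided training set: every sentence expresses one view.
narrow_corpus = [
    "cats are wonderful pets",
    "cats are wonderful companions",
    "cats are the best animals",
]
model = train_bigram_model(narrow_corpus)

# Whatever we ask, the model can only echo the single view it has seen;
# broadening the corpus is the only way to broaden its "opinions".
print(generate(model, "cats"))
```

Widening the model’s range means widening its corpus, not patching its generator, which is exactly the argument for exposing AI to diverse data.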

An open mind relies on access to a variety of evidence and data. Yet the businesses that advance AI often hide this data behind firewalls. Open minds need open information, so to improve diversity, learning datasets need to be shared broadly.

Be fair: cracking the “black box”

Our ability to trust is underpinned by fairness. It is so essential that children as young as four will detect and react to unfairness. But in order to verify fairness, one must be able to see how decisions are made.

In The Hitchhiker’s Guide to the Galaxy, the supercomputer Deep Thought produced the number 42 as the Answer to the Ultimate Question of Life. In US courts today, an AI-based system likewise produces a single number that dictates humans’ fate: a risk score that judges can weigh when deciding the length of a defendant’s sentence. In a recent case, Wisconsin v. Loomis, the judge handed down a particularly long sentence based on this number, yet the defendant’s lawyer was not allowed to examine or challenge the reasoning behind the AI’s score. Just like Deep Thought, the court system could not explain its reasoning. The answer was produced in what’s referred to as a “black box”: an electronic system closed to analysis or inspection.

This dismissive “because I said so” approach does not build a sense of fairness or trust in either AI or children. If we refuse kids an explanation for the decisions we make, we drive them to rebel. In school we even require them to “show their work” rather than just provide the final answer. To avoid rejection and unfair judgment, we must be able to inspect and interrogate the reasoning—the algorithm—behind a decision. AI needs not only to produce an answer, but also to explain it.

AI cannot yet explain itself, but it is learning. DARPA has launched an explainable-AI program as one way of opening up the black box, and this year the European Union will begin enforcing a law requiring that decisions made by a machine be explainable. Requiring AI peer review or similar validation techniques would subject the algorithms that affect so many lives to sufficient scrutiny. If we can demand that our human children explain their actions, we can surely ask the same of our artificial creations.
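As a contrast to the black box, here is a minimal sketch of what “showing your work” could look like in code. The risk model below is a toy linear score with invented feature names and weights (the internals of real sentencing tools such as the one in Loomis are proprietary and unknown), but it illustrates the property we should demand: every factor behind the number can be listed, inspected, and challenged.

```python
# A toy, hypothetical risk score whose every contribution can be listed,
# unlike a black-box system. Feature names and weights are invented.
WEIGHTS = {
    "prior_offenses": 0.6,
    "age_under_25": 0.3,
    "employed": -0.4,
}

def score_with_explanation(defendant):
    """Return a risk score plus the per-feature reasoning behind it."""
    contributions = {
        feature: WEIGHTS[feature] * value
        for feature, value in defendant.items()
        if feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

risk, reasons = score_with_explanation(
    {"prior_offenses": 2, "age_under_25": 1, "employed": 1}
)
print(f"risk score: {risk:.1f}")
for feature, contribution in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.1f}")  # each factor is open to challenge
```

A defense lawyer can argue with a list like this; nobody can argue with a bare number.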

Be kind: forging the empathy chip

Understanding the process of decision-making is necessary but not sufficient on its own—sometimes we need to improve the rules we follow in the first place. This requires two key traits: empathy and imagination.

The human ability to extrapolate allows us to reason and make decisions in circumstances we have never experienced before. We teach our children to “put yourself in someone else’s shoes” to understand another person’s perspective—to empathize. We want them to imagine being in a new situation, or even being someone else entirely.

AI needs to learn the same skills. It is getting better at understanding human emotion: Apple acquired a company in this space back in 2016, and many others are working on similar technologies. But reading emotions is just the beginning: Understanding the outcomes for all involved is the next step.

We teach kids Newton’s third law as part of their introduction to physics: Every action creates an equal and opposite reaction. Children learn an equivalent in the social world: Actions have consequences. But learning consequences from one’s own experience is a slow process. AI can explore many outcomes simultaneously through simulation—virtually playing out sequences of actions in order to predict where they lead—which is a machine’s version of imagination.

This high-speed simulation is what allows an AI to pick the chess move with the highest probability of winning the game. AI can also play out a far greater range of eventualities than a human, and in far more complex systems: It can simulate the consequences of a tax cut on consumer confidence, or of rising particle counts on children’s asthma rates.
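Here is a minimal sketch, in Python, of that kind of machine imagination: a Monte Carlo planner that tries each candidate move, plays out many noisy futures, and keeps the move that does best on average. The toy “reach the target” world is invented for illustration; real planners apply the same idea over vastly richer models of the world.

```python
# A minimal sketch of simulation as "machine imagination": estimate each
# candidate move's value by playing out many noisy futures and keep the
# move that does best on average. The toy world below is hypothetical.
import random

TARGET = 10   # where we want to end up
STEPS = 5     # how far into the future each rollout looks

def rollout(first_move, start=0):
    """Play one noisy future after `first_move`; closer to TARGET is better."""
    position = start + first_move
    for _ in range(STEPS - 1):
        position += random.choice([-1, 0, 1, 2])  # the world drifts unpredictably
    return -abs(TARGET - position)                # payoff: distance penalty

def best_move(candidate_moves, rollouts=2000):
    """Monte Carlo choice: the move with the highest average simulated payoff."""
    def estimated_value(move):
        return sum(rollout(move) for _ in range(rollouts)) / rollouts
    return max(candidate_moves, key=estimated_value)

print(best_move([1, 3, 5, 8]))  # simulation usually favors the larger first step
```

The machine never takes a single real step; it imagines thousands of futures and acts on what it learns from them.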

Simulation may not only help find the edges of acceptable behavior—it might also mitigate doomsday scenarios by mapping out these eventualities and preparing for them. Applying empathy—an ability to understand the effects of these eventualities—to an AI simulation is the final critical step in understanding, and improving, the systems we create. It is the frontier of self-reflection, of “walking a mile in someone else’s shoes,” that helps us recognize the limits and flaws of our own thinking.

* * *

AI is rapidly evolving into a reflection of humanity. Making a machine indistinguishable from a human in a two-way conversation has been the objective of the Turing Test for over half a century. That test is difficult to pass, but it is no longer sufficient for our times. We need a super-Turing test that measures AI against humanity as we want it to be when our AI grows up: not just human, but kind, fair, and open-minded.

An African proverb tells us that “it takes a village to raise a child.” We need an electronic village that can provide a diversity of influences, transparent governance, and shared empathy to raise AIs that can pass the super-Turing test. AIs that augment us should not simply make us perform better, but should make us a better humankind.