To Improve AI, Scientists May Have to Make It Worse

As annoying as it is to forget something, it may be one of the qualities that makes you superior to robots.

It can be incredibly frustrating or embarrassing when a detail you need to remember, like a project deadline, slips your mind. But recently, neuroscientists have been toying with the idea that some sorts of forgetting—the way we subconsciously choose which information to keep and which to discard—may be a functional advantage. And developers are realizing it’s one of the hardest things about the human brain to recreate in artificial intelligence.

It’s not a bandwidth problem. Your brain could store memories from every moment in your life, says Blake Richards, a neuroscientist at the University of Toronto in Canada. (Some people do—these individuals have a condition called “hyperthymesia” and can remember every detail from their lives.) But, as Richards and his colleague Paul Frankland argue in a review paper published in the journal Neuron, ridding your brain of certain types of information is valuable for your functioning: it helps your brain make note of the most important things that happen to you.

“It’s useful to engage in some degree of forgetting,” says Richards. “But in order for that forgetting to be useful, you need to be forgetting the right type of information.” That way, our brains keep only the most important information, and we can parse it more quickly and learn from it.

Think about everything that happens to you over the course of a day: the thoughts you had when you woke up, the details of your morning routine, where you sat on your commute. These are called episodic memories, and over the course of a lifetime, the vast majority of the details they contain won’t matter.

What do matter, though, are the atypical, important instances in your life—like when you first met your future spouse, or the day you started a new job. Our brains can recognize that these are significant moments. We hold onto these important memories for longer, can easily recall them, and can relate them to new memories we make. Limiting the information we keep readily available means that the important stuff is more easily accessible. Richards gives the example of remembering an old friend’s phone number: If you no longer need to call your friend (or, more likely, you’ve stored his number in your phone), keeping that old number in mind may just add unnecessary noise when you’re trying to remember a new phone number that’s more important right now. Forgetting information that’s no longer useful may streamline the recall process.

Understanding how our brains decide what’s important and what’s worth forgetting has implications for creating better AI. Currently, AI has a problem with something called “catastrophic forgetting.” As Dave Gershgorn, Quartz’s artificial intelligence reporter, explains, this is when AI learns all it can about one subject, but then is placed in an entirely new setting. In order to adapt to the new setting, it throws out all of the old stuff—even when some of it may still be useful in the new context.
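To make that concrete, here is a minimal, hypothetical sketch in Python. Everything in it (the two synthetic tasks, the tiny logistic-regression model, the training settings) is invented for illustration and isn’t drawn from any real system. The model learns task A, is then trained only on task B, and plain gradient descent simply overwrites what it had learned.

```python
# A toy illustration of catastrophic forgetting (a hypothetical sketch, not any
# particular production system). A one-feature logistic-regression model learns
# task A, whose classes sit around x = +1 and x = -1, and is then trained only
# on task B, whose classes sit around x = +5 and x = +3. Nothing protects the
# old knowledge, so the decision boundary simply migrates to wherever task B
# needs it, and performance on task A collapses.
import numpy as np

rng = np.random.default_rng(0)

def make_task(center_pos, center_neg, n=500):
    """One-dimensional task: positives near center_pos, negatives near center_neg."""
    x = np.concatenate([rng.normal(center_pos, 0.5, n),
                        rng.normal(center_neg, 0.5, n)])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(x, y, w, b, lr=0.1, epochs=4000):
    """Plain full-batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        err = sigmoid(w * x + b) - y
        w -= lr * np.mean(err * x)
        b -= lr * np.mean(err)
    return w, b

def accuracy(x, y, w, b):
    return np.mean((sigmoid(w * x + b) > 0.5) == y)

xA, yA = make_task(center_pos=1.0, center_neg=-1.0)   # task A: boundary near 0
xB, yB = make_task(center_pos=5.0, center_neg=3.0)    # task B: boundary near 4

w, b = train(xA, yA, w=0.0, b=0.0)
print(f"after task A: accuracy on A = {accuracy(xA, yA, w, b):.2f}")

w, b = train(xB, yB, w, b)   # keep training, but only on task B's data
print(f"after task B: accuracy on B = {accuracy(xB, yB, w, b):.2f}, "
      f"accuracy on A = {accuracy(xA, yA, w, b):.2f}")
```

Run as written, the model should score well on task A after the first phase and near chance on it after the second, even though by then it handles task B well.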

Machine learning today isn’t great at knowing when it should hold on to old information, and when those data have become outdated. Programmers are experimenting with algorithms that teach machines when to keep old information and connect it to newer experiences, and when to let it go.

For example, developers at DeepMind, Alphabet’s machine-learning research branch, published work early this year showing the results of adding an algorithm to an AI system that allowed it to learn to play different Atari video games sequentially. The technique allows the AI to hold on to connections between the information it takes in, so that it can essentially keep learning from it over time.

“Our algorithm works by slowing down learning along the dimensions that are important to previous tasks,” says James Kirkpatrick, a DeepMind computer scientist and the lead author of the paper. “We did not let the system memorize particular pieces of data, which would be akin to episodic memory. Rather, we let the system learn from the data.”
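Here is a rough, hypothetical sketch of that idea in Python. It is not DeepMind’s implementation: their method, elastic weight consolidation, estimates each parameter’s importance from Fisher information, while the toy below uses a cruder stand-in (squared gradients accumulated while learning task A) and then adds a quadratic penalty that makes it expensive to move the parameters that mattered for task A. The synthetic tasks, the `lam` penalty weight, and the importance estimate are all invented for illustration; the setup is contrived so that one input feature matters for task A and the other for task B, so a single linear model could in principle serve both.

```python
# A toy sketch of "slowing down learning along the dimensions that are important
# to previous tasks." Not DeepMind's code: the importance weights here are just
# accumulated squared gradients from task A, a crude stand-in for the Fisher
# information that elastic weight consolidation uses. Feature 0 matters for
# task A, feature 1 for task B, so a solution that handles both exists; the
# question is whether sequential training finds it or overwrites task A.
import numpy as np

rng = np.random.default_rng(1)

def make_task(axis, n=1000):
    """Two classes separated along one input feature; the other feature is noise."""
    X = rng.normal(0.0, 0.6, size=(2 * n, 2))
    X[:n, axis] += 2.0      # positive class
    X[n:, axis] -= 2.0      # negative class
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, theta, lr=0.1, epochs=5000, anchor=None, importance=None, lam=0.1):
    """Full-batch gradient descent on the logistic loss. If anchor and importance
    are given, add a penalty lam * importance_j * (theta_j - anchor_j)^2 so that
    parameters which mattered for the previous task are expensive to move."""
    theta = theta.copy()
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold the bias into theta[-1]
    sq_grad_sum = np.zeros_like(theta)          # importance estimate for this task
    for _ in range(epochs):
        err = sigmoid(Xb @ theta) - y
        data_grad = Xb.T @ err / len(y)
        grad = data_grad
        if anchor is not None:
            grad = data_grad + 2.0 * lam * importance * (theta - anchor)
        theta -= lr * grad
        sq_grad_sum += data_grad ** 2
    return theta, sq_grad_sum

def accuracy(X, y, theta):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((sigmoid(Xb @ theta) > 0.5) == y)

XA, yA = make_task(axis=0)   # task A: decide using feature 0
XB, yB = make_task(axis=1)   # task B: decide using feature 1

theta_A, importance_A = train(XA, yA, np.zeros(3))

# Plain sequential training: fitting task B gradually erodes the weight task A used.
theta_plain, _ = train(XB, yB, theta_A)

# Anchored training: stiff along task A's important dimension, free everywhere else.
theta_anchored, _ = train(XB, yB, theta_A, anchor=theta_A, importance=importance_A)

print(f"plain    : accuracy on A = {accuracy(XA, yA, theta_plain):.2f}, "
      f"on B = {accuracy(XB, yB, theta_plain):.2f}")
print(f"anchored : accuracy on A = {accuracy(XA, yA, theta_anchored):.2f}, "
      f"on B = {accuracy(XB, yB, theta_anchored):.2f}")
```

In this sketch, the plain run should end up much worse on task A than the anchored run, while both do well on task B. The anchored model gives up a little accuracy on the new task to protect the old one, a small version of the capacity trade-off Kirkpatrick goes on to describe.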

It’s similar to the way the human brain works: making connections to some episodic data that is important, but not memorizing every detail of every day. That said, it’s still a limited algorithm. “At some point the network would reach its full capacity and no new knowledge could be learned,” says Kirkpatrick. So although it’s a step towards a future where machines could learn which information to store and discard, it doesn’t quite get the AI to a point where it can actively forget over time to continuously learn like we do.

Though to be fair, neuroscientists don’t really know exactly how our brains do it, either. The next challenge for both neuroscientists and AI developers will be to learn how our brains triage incoming information into keep, connect, and trash bins, so to speak. Illuminating this process is especially difficult because it happens automatically in our heads all the time. When we notice forgetting, it’s when our brain slips up and forgets something important. “We have a bias to notice those moments when our forgetting is not well done,” Richards says. Most of the time, we don’t notice when our brain has gotten rid of insignificant details, because we never needed them anyway.