This Is Why the IARPA Director Doesn’t Worry About Self-Aware Artificial Intelligence


Poorly engineered artificial intelligence is more likely to keep Jason Matheny up at night than a program becoming sentient.

Given the current pace of technological advancement, poorly designed computer programs pose a bigger existential threat than a highly intelligent, vengeful machine overlord, according to Jason Matheny, director of the Intelligence Advanced Research Projects Activity.

“We’re much less worried about ‘Terminator’ and Skynet scenarios than we are about digital ‘Flubber’ scenarios,” Matheny said during Nextgov and Defense One’s Genius Machines 2018 event on March 7, referring to the autonomous, technically intelligent blob with a penchant for creating chaos from the movie of the same name. “Really badly engineered systems that are vulnerable to either error or malicious attack from outside.”

Matheny noted IARPA’s ethical standards, which include some simple yet revealing questions.

“We have a set of questions that guide the kinds of things we invest in or don’t invest in,” he explained. “Among them are questions like, ‘If we succeed in building this, would we regret it?’ and, ‘Under what conditions would we regret it?’”

More often than not, the most severe consequences come from system insecurity rather than from the risk of sparking singularity wars.

“Self-awareness as a property of systems (I guess you could say in some sense autonomous systems that have some representation of their own state are already self-aware in a very primitive sense), but I’m much less concerned about sentience and self-determination of autonomous systems than I am of various other kinds of failure modes,” he said.

While Matheny noted the asymmetry of current cybersecurity (it is much, much easier to attack than to defend), he said there are steps that can be taken to bring more parity to the fight, if not reverse the imbalance outright. He cited the Cyber Grand Challenge held by the Defense Advanced Research Projects Agency, which spurred innovations in automated defense.

“Pursuing tournament models like that is probably needed not only in cyber but also in AI,” he said. “One of the topics that we’ve become especially anxious about in the last couple of years is the brittleness, or the frailty, of most AI approaches.”

He listed a few simple hacks that could be used to fool or disrupt AI systems. (Catch the full rundown in the livestream recording.)
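The specific hacks he walked through are in that recording, but one widely studied class of such attacks is the adversarial example: a small, deliberately chosen perturbation that flips a model’s prediction. The sketch below is purely illustrative, with a toy logistic-regression classifier and made-up weights; it applies the fast gradient sign method, nudging every input feature a bounded step in the direction that increases the model’s loss.

```python
import numpy as np

# Toy logistic-regression "model": p(y=1|x) = sigmoid(w.x + b).
# The weights are hypothetical, chosen only for this illustration.
w = np.array([2.0, -3.0, 1.5, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A clean input the model classifies, correctly, as class 1.
x = np.array([0.5, -0.2, 0.4, 0.2])
y = 1.0
print(f"clean input score:       {predict(x):.3f}")      # ~0.917

# Fast gradient sign method: for cross-entropy loss, the gradient
# with respect to the INPUT is (p - y) * w, so stepping each feature
# eps in the sign of that gradient maximally increases the loss
# within an L-infinity budget of eps.
eps = 0.4
grad_x = (predict(x) - y) * w
x_adv = x + eps * np.sign(grad_x)

print(f"adversarial input score: {predict(x_adv):.3f}")  # ~0.401, misclassified
```

The toy perturbation is crude, but the same recipe scales: against image classifiers, the per-pixel changes can be too small for a human to notice while still flipping the label, which is exactly the brittleness Matheny describes.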

“Most of the commonly used machine-learning systems are vulnerable to these kinds of attacks,” Matheny said. “So, I think the same kind of cynicism, or red-team mentality, that has become common in the cybersecurity community is now sorely needed in the machine-learning community.”
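What might that red-team mentality look like in practice? One habit, sketched below under assumed conditions (a synthetic linear classifier and made-up test data, not any system Matheny named), is to report a model’s accuracy under a worst-case bounded perturbation alongside its clean accuracy, rather than clean accuracy alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a synthetic test set labeled by a "true" linear
# boundary, and an imperfectly trained model. The model is a stand-in;
# the evaluation habit is the point.
dim, n = 20, 500
w_true = rng.normal(size=dim)
X = rng.normal(size=(n, dim))
y = (X @ w_true > 0).astype(float)
w = w_true + rng.normal(scale=0.3, size=dim)   # the "trained" weights

def accuracy(inputs):
    return np.mean(((inputs @ w) > 0).astype(float) == y)

# Worst-case L-infinity attack on a linear model: shift every feature
# eps in the direction that pushes the score toward the wrong class.
eps = 0.15
push = np.where(y == 1.0, -1.0, 1.0)[:, None]  # lower scores for 1s, raise for 0s
X_adv = X + eps * push * np.sign(w)

print(f"clean accuracy:        {accuracy(X):.1%}")
print(f"accuracy under attack: {accuracy(X_adv):.1%}")
```

The gap between those two numbers is the kind of frailty Matheny is anxious about, and tournament-style exercises like the Cyber Grand Challenge institutionalize the search for exactly that gap.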