Critical Update: When Seeing No Longer Means Believing, What’s a Government to Do?


Agencies and lawmakers are moving to combat deepfakes and other synthetic media used as powerful tools in disinformation campaigns.

Deepfakes have a range of compelling applications in the modern communication and entertainment realms—but techniques underpinning them can also be repurposed for nefarious uses.

“We're aware in other countries that deepfakes have been used to try to change the result of elections,” the Government Accountability Office’s Director of Science and Technology Assessment Karen Howard explained in this episode of Nextgov’s Critical Update podcast. “We do know that they are most commonly used currently to exploit people, particularly women, by placing their faces into non-consensual pornographic videos.”

This form of synthetic media, which often presents people doing and saying things they did not actually do or say, has risen in popularity over the last several years. The content is technologically manipulated and inherently fake, but state actors and others are turning to it as a cheap, quick tool for undermining people's sense of reality.

Rapidly evolving technology makes it possible.

“A deepfake is a portmanteau of two words: deep learning, which is a branch of artificial intelligence, and fake. Essentially, when we talk about a deepfake, we mean a piece of synthetic media—and it can be a text, an image, or a video that is either manipulated or entirely created by artificial intelligence,” said Matthew F. Ferraro, an attorney at WilmerHale and a former intelligence officer. “The definition that I walk around with in my head is that it's a very convincing media forgery created by computers.”

Deepfakes centered on public officials have recently gone viral, catching the attention of America's federal and state governments.

Ferraro shed light on recent policy proposals, bills and interventions regarding this novel content. He and GAO's Howard, along with Matt Turek, the acting deputy director of the Defense Advanced Research Projects Agency's Information Innovation Office, also discussed emerging and existing threats posed by deepfakes—and how some federal entities are responding.

“From a broader U.S. government perspective, I think one of the challenges is just, you know, the structure and the openness of our society provide some additional attack vectors that more authoritarian societies don't have,” Turek said. “So we need to figure out how to build those defenses.”

Listen to the full episode below or download it from Apple Podcasts, Google Podcasts or your favorite platform.