CISA, DARPA Offer Look Into Their Dealings with Deepfakes


Agency and industry officials outlined their approaches to disinformation campaigns ahead of the 2020 election.

Agency and industry officials this week detailed their efforts to improve public resiliency, streamline communication and accelerate technical solutions to counter the threats posed by deepfakes and other disinformation techniques ahead of next year’s election. 

“Essentially, if you generalize a bit, these are attacks on knowledge, right, which underpins everything that we do,” Matt Turek, program manager for the Defense Advanced Research Projects Agency’s Information Innovation Office, said on a panel at the Center for Strategic and International Studies Wednesday. “It underpins our trust in institutions and organizations.”

Turek runs a media forensics program inside DARPA that leverages technology to automatically assess whether images and video have been manipulated. More specifically, he said the program also produces a quantitative score of the content’s integrity, which enables researchers to “do a triage of data at scale.” Though the program launched in 2016, Turek said that deepfaking as it was originally understood, as a specific automated technique for swapping faces in video, emerged in 2017, when source code for creating the media was released on the social media site Reddit. And over the last year, politicians and major figures of popular culture have increasingly fallen victim to deepfakes that falsely depict them doing or saying things they never did.
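To make the triage idea concrete, here is a minimal, hypothetical sketch of how a batch of media might be scored and ranked for human review. It is not DARPA’s actual tooling: the two detector functions are toy placeholders standing in for trained forensic models, and the fused “integrity score” here is simply the worst detector output.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Placeholder detectors --------------------------------------------
# In a real media-forensics pipeline these would be trained models that
# look for different manipulation fingerprints (sensor-noise breaks,
# compression inconsistencies, face-blending artifacts, etc.). Here
# they are toy stand-ins that score simple image statistics in [0, 1],
# where higher means "more likely manipulated".

def local_variance_score(img: np.ndarray) -> float:
    """Flag images whose left/right halves have mismatched texture."""
    w = img.shape[1]
    gap = abs(img[:, : w // 2].var() - img[:, w // 2 :].var())
    return float(min(1.0, gap / (img.var() + 1e-9)))

def histogram_score(img: np.ndarray) -> float:
    """Flag images whose intensity histogram is suspiciously spiky."""
    hist, _ = np.histogram(img, bins=32, range=(0.0, 1.0), density=True)
    spikiness = hist.max() / (hist.mean() + 1e-9)
    return float(min(1.0, (spikiness - 1.0) / 10.0))

def integrity_score(img: np.ndarray) -> float:
    """Fuse detector outputs into one score; higher = more suspect."""
    return max(local_variance_score(img), histogram_score(img))

# --- Triage at scale: score a batch, review the worst items first -----
queue = {f"clip_{i:03d}": rng.random((64, 64)) for i in range(100)}
queue["clip_042"][:, 32:] = 0.5  # plant one crudely "manipulated" item

ranked = sorted(queue, key=lambda k: integrity_score(queue[k]), reverse=True)
for name in ranked[:5]:
    print(f"{name}: integrity score {integrity_score(queue[name]):.2f}")
```

A production pipeline would fuse many specialized detectors with learned weights, but the ranking pattern is the same: score everything automatically, then spend scarce analyst time on the most suspect items.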

“And now, that term has been broadly adopted to essentially be any automated, or somewhat automated, manipulation technique primarily in video—but the term in broad use is starting to apply to other media types as well,” Turek said. 

McAfee’s Senior Vice President and Chief Technology Officer Steve Grobman added that while the weaponization of the manipulated media feels new, historical archives from at least World War II offer many examples of images being altered to make people believe things that were not true. 

“When we think about the problem of deepfakes in 2019, it’s really about lowering the barrier to entry to building misinformation for the use of information warfare,” Grobman said. “What’s really different with the advances in artificial intelligence is you can now build a very compelling deepfake without being an expert in technology or data science.”

He explained that there are now websites where individuals can pay small sums of money for systems that generate custom deepfakes on demand. So the problem is no longer just that new forms of misinformation content are emerging, but that entirely new classes of users can now create that content and subsequently distribute massive quantities of misinformation across modern platforms. 

Therefore, Grobman said it’s critical for public and private stakeholders to educate “the public as we move into the election cycle, to be skeptical and to see where things actually originate from—and on [the] technical side, it is about building out the detection capabilities so that we can win the cat and mouse game against the creation of deepfakes.” 

And agencies said they are already embracing those activities. 

“When it comes to things like disinformation and deepfakes, we’re sort of laser-focused on the elections aspect of that,” the Cybersecurity and Infrastructure Security Agency’s Assistant Director for Cybersecurity Jeanette Manfra said. “There’s a lot of other pieces where deepfakes can be used in ways that would be harmful.”

Though the FBI has the overall lead on countering foreign influence operations, Manfra said CISA is working fervently to “build a more resilient public” and broaden the trust people have in the government, so that they respond more actively and immediately when they notice what could be harmful, manipulated media. 

“We need to make sure that we have a public that is educated and understands that there are people out there that are going to try to do these things, and so you know, we think the best solution for this is as much sunlight as possible,” Manfra said. “When people know about something, just get it out there, whether it’s to the specific affected individual, or if it’s something that could have broader impacts—just get it out to the public.”

In that effort, CISA is partnering with news organizations to improve their understanding of where to go for authoritative information when false and manipulated information is disseminated, or adversaries launch false content to divide communities. 

“There’s a lot of ways in which our critical functions are dependent on the public’s confidence that they are going to work,” Manfra said. “And it’s good that we have that confidence, but also as we’ve learned, it can be easily manipulated. So this is, I think, a really big challenge for not just the government, but also as a society to really think about.”

DARPA’s Turek also emphasized that disinformation campaigns are not limited to the political sector; they also target finance, commerce, insurance and the scientific process. He said the agency’s researchers are working directly with the Health and Human Services Department to address allegations of scientific fraud. The team is now building technological tools that can take apart scientific publications and determine whether the images they include have been manipulated. 
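One widely used family of techniques in that space is copy-move detection, which flags regions of an image that appear more than once, a common signature of duplicated blot bands or cloned micrograph patches. The sketch below is a hypothetical illustration of the basic idea using exact block hashing, not the HHS or DARPA tooling; real tools use overlapping blocks and robust features so matches survive re-compression and rescaling.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

# Synthetic "figure": random texture with one region copied elsewhere,
# mimicking a duplicated band or patch in a scientific image.
fig = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
fig[80:96, 80:96] = fig[16:32, 16:32]  # plant an exact duplicate

BLOCK = 16  # block size in pixels (toy value for this illustration)

# Hash every non-overlapping block and record where each hash occurs.
seen = defaultdict(list)
for y in range(0, fig.shape[0] - BLOCK + 1, BLOCK):
    for x in range(0, fig.shape[1] - BLOCK + 1, BLOCK):
        block = fig[y : y + BLOCK, x : x + BLOCK]
        seen[block.tobytes()].append((y, x))

# Any block content that occurs at two different locations is a
# candidate copy-move manipulation worth a human reviewer's attention.
for locations in seen.values():
    if len(locations) > 1:
        print("possible duplicated region at:", locations)
```

The output of a screen like this is a shortlist for human judgment, not a verdict; duplicated regions can also arise legitimately, which is why such tools are framed as aids to fraud investigation rather than automated accusers.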

“So, there is really broad-based opportunities for these sort of manipulation tools, and again, it’s not just sort of elections, military decisions and politics,” he said. “But I think they can essentially touch us in our everyday lives.”