Lawmakers Press Social Media Giants to Confront Deepfake Threats


Sens. Marco Rubio and Mark Warner want Facebook, YouTube, TikTok and others to create industry standards for handling synthetic content.

Two senators sent letters to Facebook, YouTube, TikTok and eight other social media companies Wednesday, urging them to develop industry standards and internal policies to address the growing threats posed by deepfakes and other fabricated online media.

“As concerning as deepfakes and other multimedia manipulation techniques are for the subjects whose actions are falsely portrayed, deepfakes pose an especially grave threat to the public’s trust in the information it consumes; particularly images, and video and audio recordings posted online,” Sens. Marco Rubio, R-Fla., and Mark Warner, D-Va., wrote. “If the public can no longer trust recorded events or images, it will have a corrosive impact on our democracy.”

Deepfake technology employs machine learning and artificial intelligence to produce synthetic audio and visual recordings that make people appear to say and do things that, in reality, they did not. The technique has been used by malicious online actors to create fake pornographic videos targeting celebrities and, more recently, against some of America’s major political figures.

Part of the problem is that deepfakes can be incredibly difficult for the general public to spot, and spotting them will only become more challenging as the technology advances. Last May, an online blogger doctored a video of House Speaker Nancy Pelosi, D-Calif., slowing her speech so that she appeared to slur her words, possibly to give the impression that she was intoxicated. It was viewed and shared millions of times, including by President Donald Trump, before it was debunked.

“The technology is widely available, becoming easier to use and more difficult to detect,” the senators wrote in their letters. 

Lawmakers have launched several recent efforts to confront and combat deepfake technology. They have called the leaders of social media giants to testify before Congress, for example, and unveiled legislation to boost research and public awareness and to target generative adversarial networks, the technology that underpins most deepfakes.
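Since the legislation singles out generative adversarial networks, it may help to see what that architecture looks like in practice. The sketch below is a minimal, generic GAN training step in PyTorch; the layer sizes and data shapes are toy placeholders chosen for illustration, and production deepfake models are far larger and operate on video rather than flattened images.

```python
# Minimal sketch of a generative adversarial network (GAN).
# All dimensions are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g., flattened 28x28 frames

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, latent_dim))

    # Train the discriminator to separate real frames from generated ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
              loss_fn(D(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

# Random tensors stand in for a real training batch here.
train_step(torch.rand(32, image_dim) * 2 - 1)
```

As the two networks compete, the generator's output becomes progressively harder to distinguish from real footage, which is precisely why the resulting fakes are so difficult to detect.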

On top of testifying to Congress, some companies are also jumpstarting their own campaigns to address disinformation and the misuse of the tech. Last week, Google CEO Sundar Pichai announced that the company is sharing a large dataset of visual deepfakes to support researchers working on synthetic video detection.

“Detecting deepfakes is one of the most important challenges ahead of us,” Pichai tweeted.

But Rubio and Warner argue that, while the present efforts are important, they’re not nearly enough. 

“Despite numerous conversations, meetings, and public testimony acknowledging your responsibilities to the public, there has been limited progress in creating industry-wide standards on the pressing issue of deepfakes and synthetic media,” the senators wrote. “Having a clear strategy and policy in place for authenticating media, and slowing the pace at which disinformation spreads, can help blunt some of these risks.”

The lawmakers added that companies should produce policies and labeling that support the public’s digital media literacy. Such measures would also help researchers track disinformation campaigns by foreign adversaries that deploy manipulated media in hopes of undermining America’s democracy, they said.

Rubio and Warner also pose a series of seven questions for the companies to weigh in on. In particular, they ask about the platforms’ technical ability to detect and archive misleading content, how they support victims who are targeted by deepfakes and whether they will adjust their algorithms to slow the rapid spread of fabricated content.

“The threat of deepfakes is real, and only by dealing with it transparently can we hope to retain the public’s trust in the platforms it uses, and limit the widespread damage, disruption, and confusion that even one successful deepfake can have,” the senators wrote. 

Regarding the senators’ requests, Gary M. Shiffman, a former Homeland Security Department chief of staff who now teaches at Georgetown University, told Nextgov that computer vision models are currently not very good at detecting manipulated media. For policies to carry real weight, he said, computers need a sharper ability to identify anomalous patterns of behavior and to reliably recognize deepfakes.

“As it stands, computer vision models can't detect deepfakes, or not well. So even if there is a policy in place, implementation will be limited by the ability to identify the deepfakes. That's difficult,” Shiffman said. “What is true is that policies in place to respond to deepfakes are reliant entirely on the ability to identify deepfakes. But we just don't have that ability yet.”
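Shiffman’s caveat is easier to appreciate with a concrete picture of how detection is typically attempted. The sketch below is a generic frame-level classifier in PyTorch, not any platform’s actual system; the architecture, decision threshold and frame-averaging rule are all illustrative assumptions, and a model of this kind performing poorly in the wild is exactly the limitation he describes.

```python
# Illustrative frame-level deepfake detector: score individual frames,
# then average. Architecture and threshold are placeholder assumptions.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # logit: higher means "more likely manipulated"
)

def score_video(frames: torch.Tensor, threshold: float = 0.5) -> bool:
    """Flag a clip as a suspected deepfake.

    frames: (num_frames, 3, H, W) tensor of RGB frames scaled to [0, 1].
    Per-frame probabilities are averaged so a single odd frame does not
    dominate the decision.
    """
    with torch.no_grad():
        probs = torch.sigmoid(detector(frames)).squeeze(1)
    return probs.mean().item() > threshold

# Example: 8 random 64x64 frames stand in for a decoded video clip.
clip = torch.rand(8, 3, 64, 64)
print(score_video(clip))  # untrained weights, so the output is arbitrary
```

The brittleness is visible in the design itself: the classifier only knows the manipulation artifacts it was trained on, so as generation techniques advance, yesterday’s detector quietly stops working, which is the gap Shiffman says current policies cannot paper over.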