Mark Zuckerberg Is Rethinking Deepfakes

Facebook CEO Mark Zuckerberg (Francois Mori / AP)

In an interview, the Facebook CEO hinted that the company is trying a new approach to misleading videos created through artificial intelligence.

“Is it AI-manipulated media or manipulated media using AI that makes someone say something they didn’t say?” Zuckerberg asked. “I think that’s probably a pretty reasonable definition.”

It is also a noticeably narrow definition. Facebook, for example, recently came under fire for its decision to leave up a video of Nancy Pelosi that had been slowed down to make her appear drug-impaired or otherwise cognitively unsound. That video involved no AI at all, only traditional (and quite basic) editing techniques.

While the Pelosi controversy was clearly in the background, Zuckerberg’s stated rationale for his definition was to prevent an explosion of takedowns that could result from too broad a definition.

“If [our deepfake definition] is any video that is cut in a way that someone thinks is misleading, well, I know a lot of people who have done TV interviews that have been cut in ways they didn’t like, that they thought changed the definition or meaning of what they were trying to say,” he said. “I think you want to make sure you are scoping this carefully enough that you’re not giving people the grounds or precedent to argue that things that they don’t like, or changed the meaning somewhat of what they said in an interview, get taken down.”

Which, given how often people claim to have been misquoted or misrepresented by journalists, is probably a legitimate fear.

Sunstein, who was conducting the interview, pushed for a broader definition of the kinds of video Facebook should not allow and explicitly referenced the Pelosi video.

Zuckerberg described the problem with Facebook’s response as primarily one of “execution.” He said it took the company’s systems “more than a day” to flag the video as potentially misleading; outside fact-checkers then confirmed the flag within an hour, but over that day the video achieved large-scale distribution. Zuckerberg’s preferred outcome would have been for the video to stay up but be flagged immediately, greatly limiting its distribution. “What we want to be doing is improving execution,” Zuckerberg said, “but I do not think we want to go so far toward saying a private company prevented you from saying something that it thinks is factually incorrect.”

That was in line with Zuckerberg’s other comments this afternoon, in which he repeatedly called for regulation to settle “fundamental trade-offs in values that I don’t think people want private companies to be making by themselves.”

Until that regulation arrives, however, Zuckerberg said his company is working to build the best systems of governance it can. And he noted that Facebook now spends more on content review and safety than the company’s entire revenue at the time of its IPO, which suggests a spending rate of roughly a billion dollars a quarter.

And it was this spending, and the new (clearly still imperfect) infrastructure that it has created, that Zuckerberg used to defend his company from the renewed calls to break up Facebook. “On election integrity or content systems, we have an ability because we’re a successful company and large to be able to go build these systems that are unprecedented,” he said.

Not all problems seem to be solvable by scale, however. Earlier in the interview, when asked about foreign intervention in America’s elections, Zuckerberg reeled off a list of new Facebook policies, but then ultimately punted. “That’s above my pay grade,” he said.