The bipartisan legislation directs the National Science Foundation and National Institute of Standards and Technology to accelerate the detection of disruptive, manipulated media.
Legislation aimed at dedicating new research and technology to thwart increasingly prevalent manipulated media—now commonly known as deepfakes—passed the House Science, Space and Technology Committee with bipartisan support Wednesday.
Introduced last week by Reps. Anthony Gonzalez, R-Ohio, Jim Baird, R-Ind., Haley Stevens, D-Mich., and Katie Hill, D-Calif., the Identifying Outputs of Generative Adversarial Networks Act, or IOGAN Act, directs the National Science Foundation and the National Institute of Standards and Technology to study and accelerate the development of technology that can detect such disruptive content.
“Deepfakes are not a new phenomenon … you will remember the famous scene where Forrest Gump was filmed shaking hands with historic presidents. At that time, the technique was revolutionary and very expensive and difficult to reproduce—only big Hollywood studios could afford to reproduce deepfakes with such images,” Gonzalez said at the bill’s markup. “Fast forward a few decades, and we now live in a world where advancements in technology and computing power has increased exponentially.”
Generative adversarial networks are the core technology underpinning deepfakes. A GAN pits two neural networks against each other in a feedback loop: one generates synthetic media based on the images or other data it has been fed, while the other tries to distinguish the fakes from real examples, pushing the generator to produce ever more realistic, but manipulated, content. Over the past year, major figures in popular culture have increasingly fallen victim to deepfakes that make them appear to say or do things that, in reality, they never said or did.
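That adversarial feedback loop can be caricatured in a few lines of Python. The sketch below is purely illustrative and not from the bill or any real GAN implementation: the "generator" and "discriminator" are simple functions over numbers rather than neural networks, and all names and values are hypothetical. It shows only the loop itself: the generator produces a sample, the discriminator scores how "real" it looks, and the generator nudges itself in whichever direction fools the discriminator more.

```python
import random

random.seed(0)

# Toy caricature of a GAN's feedback loop (illustrative only; real GANs
# train two neural networks by gradient descent on large media datasets).
REAL_MEAN, REAL_STD = 4.0, 0.5           # the "real data" distribution

def sample_real(n):
    """Draw a batch of 'real' data points."""
    return [random.gauss(REAL_MEAN, REAL_STD) for _ in range(n)]

def discriminator(x, real_batch):
    """Score in (0, 1]: how plausibly x came from the real data.
    Here simply: closeness to the real batch's mean."""
    mu = sum(real_batch) / len(real_batch)
    return 1.0 / (1.0 + abs(x - mu))

gen_mean = 0.0                           # the generator's only parameter
step_size = 0.05
for _ in range(300):
    real_batch = sample_real(32)
    fake = random.gauss(gen_mean, 1.0)   # generator's output
    # Feedback: probe which direction the discriminator scores higher,
    # then nudge the generator that way -- i.e., learn to look "real".
    up = discriminator(fake + 0.1, real_batch)
    down = discriminator(fake - 0.1, real_batch)
    gen_mean += step_size if up > down else -step_size

# gen_mean has drifted from 0.0 toward the real mean (around 4.0)
```

In an actual GAN, both sides are neural networks and the discriminator is trained alongside the generator; the point here is only the structure of the loop: generate, get scored, adjust, repeat, until the fakes are hard to tell from the real thing.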
Washington was also rattled by such content last May, when an online blogger released a doctored video of House Speaker Nancy Pelosi, D-Calif. In the clip, which appears to have been slowed down to make the speaker sound intoxicated, Pelosi seems to slur her words. The manipulated video was viewed by millions and shared widely across Facebook and Twitter, including by President Trump, before it was debunked.
The IOGAN Act requires NSF and NIST to supplement research on digital media forensic tools or comparable technologies to detect and constrain GANs and deepfakes, gain input from stakeholders and experts across the public, private and academic sectors, and submit a report on their findings and policy recommendations within the next year, among other mandates.
The bill also includes an approved amendment from Rep. Jennifer Wexton, D-Va., to strengthen the general public's ability to spot fake media. Wexton's amendment directs NSF to also research how to help the public discern the authenticity of the vast amounts of content they are served.
“The underlying bill will help mitigate those problems, but in addition to developing new technologies, we need to teach Americans how to detect manipulated content that seeks to spread disinformation in the first place,” Wexton said during the markup. “This is a critical component to our national security deterrent strategy to combat disinformation campaigns, because the more education and awareness we have, the better we can strengthen and safeguard our democracy.”
The IOGAN Act marks the latest of several bills lawmakers at both the state and federal levels are crafting to combat the threats posed by the spread of falsified content. Matthew F. Ferraro, a senior associate at WilmerHale who specializes in disinformation and deepfake issues, told Nextgov that draft bills are not only moving quickly, but that new legislation to tackle deepfakes seems to be dropping almost every month.
“These efforts are usually strikingly bipartisan, both in Congress and in statehouses. In Congress, the two most recent deepfake-related bills, the IOGAN Act and the Deepfake Reports Act, have strong bipartisan backing,” Ferraro said. “And states that have legislated in this area have done so with bipartisan support.”
On Wednesday, Ferraro published a nationwide survey of deepfake legislation just as the IOGAN Act was moving through the committee; the bill wasn't included in his report, a sign of how fast deepfake legislation is moving. Ferraro noted that three states, California, Virginia and Texas, have already passed bills banning all or certain kinds of deepfakes with bipartisan support.
Regarding the IOGAN bill and Wexton’s amendment, Ferraro added that the congresswoman’s submission is “very much in line” with other deepfake legislation introduced in Congress.
“[The others have also] focused on directing executive branch agencies to conduct research and write reports on deepfake-related technologies and possible countermeasures,” he said. “I think Rep. Wexton's emphasis on raising public awareness is well-taken.”