The proposal would create a Department of Homeland Security-led task force to pinpoint technologies useful for tracing content back to its origin.
Senate Homeland Security and Governmental Affairs Committee leaders moved to form a new federal task force to explore setting standards and deploying technologies for determining facts about the origins of digital content.
That cadre—the National Deepfake and Digital Provenance Task Force—would draw insights from across the public, private and academic landscapes and operate within the Homeland Security Department, according to legislation introduced by ranking member Rob Portman, R-Ohio, and Chairman Gary Peters, D-Mich., on Thursday.
It’s meant to help chart a path forward for how DHS and other federal agencies can work to counter the online spread of maliciously made synthetic media.
Former U.S. diplomat Mounir Ibrahim told Nextgov Monday that this marks Congress’ first piece of legislation to explicitly home in on digital content provenance, or the verifiable chronology of the inception and history of images, videos, documents, recordings or other electronic media. After years serving as a foreign service officer for the State Department, he’s now vice president of strategic initiatives for Truepic, a technology company specializing in image authenticity.
Ibrahim explained that while many people base personal, financial, political and other vital decisions on what they see and hear online, they’re also facing “an explosion in the proliferation of image deception, fraud and fabrication tools readily available on any smartphone or computer.”
“The most advanced of these image deception techniques are known as deepfakes, or wholly fabricated synthetic videos, which are already very, very realistic—but are still improving at a rapid rate,” he said.
Such videos use emerging technologies to make people appear to do or say things that they didn’t in reality. Bad actors have weaponized standard image deception methods through cheapfakes, manipulations made with software that is cheaper and more accessible than machine learning, for a variety of illicit purposes. Experts, Ibrahim noted, are also seeing advanced image deception via the more sophisticated, AI-enabled deepfakes, like those “used in illegal non-consensual pornography, which is very damaging.” The same techniques could be turned to illicit ends across government, business and society: the FBI warned several months ago that the methods are “almost certain” to be used for corporate espionage and business fraud.
But to Ibrahim, “perhaps worse than the fraud itself is the second-order effect of the erosion of trust online”—a concept known as the liar's dividend. The idea is that as cheapfakes and deepfakes proliferate, they’ll increasingly undermine trust in anything humans encounter online, even when it is true.
“One example of this is the few people who suggested the video of George Floyd's murder was a deepfake. Though that was not widely accepted, that is a snapshot of how the liar's dividend can be weaponized,” Ibrahim said. “In short, the erosion of trust will turn into the erosion of our shared sense of reality.”
To confront that threat, the lawmakers’ 14-page legislation outlines their proposals for the makeup and responsibilities of the new DHS task force.
The strategic group would be co-chaired by DHS and Office of Science and Technology Policy officials and include 12 members equally representing the government, private and academic sectors. Each of those selected would have technical expertise in artificial intelligence, media manipulation, cryptography, digital forensics or other relevant fields. They would consult the Energy, Defense and State secretaries, National Institute of Standards and Technology and National Science Foundation directors, among other agency leaders, over the course of their work.
Broadly, the ultimate intent of the task force would be to map out a coordinated plan for investigating how a digital content provenance standard could assist with reducing the dissemination of deepfakes, help advance tools for content creators to authenticate their media and its origins, and improve how the public and private sectors relay trust and information about digital content sources to the public.
“This commonsense bipartisan bill will help strengthen our nation’s ability to combat malicious attempts to spread lies and further divide the American people,” Peters said.
Ibrahim pointed out that this legislation not only comes as image-based deception is advancing rapidly, but also builds on a notable recommendation from the National Security Commission on AI’s comprehensive review. Specifically, the group called for the creation of a new task force to consider standards for using technology to certify content authenticity and provenance. The bill also emerges as the Coalition for Content Provenance and Authenticity is building an open standard for widespread adoption across the internet. Truepic, Intel, Adobe and others participate in the coalition.
“This is the most direct and informed legislation I have seen associated with digital content provenance,” Ibrahim said. “However, we have seen other nations move towards ensuring there is transparency and information on image fabrication available to content consumers.”
Norway passed a law last month requiring social media influencers to disclose what alterations are made to digital content. The approach was also referenced in Australia's mis- and disinformation code of practice. In the U.S., the legislation follows Portman’s Deepfake Report Act, which passed the Senate last year as a provision in the 2021 National Defense Authorization Act.
“I would expect to see the approach [to provenance] begin to be understood and included in additional legislation in the US and abroad in the coming year or two,” Ibrahim said.