Experts Recommend Expanding Agencies’ Authorities to Combat Online Deception


At Facebook’s first congressional testimony of the 2020s, experts said agencies’ jurisdiction should extend to digital spaces.

Federal agencies could play more crucial roles in countering dangerous online deception targeting Americans across Facebook and other social media platforms, lawmakers heard Wednesday. 

At the House Energy and Commerce Consumer Protection Subcommittee hearing “Americans at Risk: Manipulation and Deception in the Digital Age,” experts detailed sobering cases of online deception affecting massive numbers of people, and offered creative approaches for how agencies could help prevent such deception going forward.

“You can’t just bring some new agency around and regulate all of the virtual world,” the Center for Humane Technology’s Executive Director Tristan Harris said. “Why don’t we take the existing infrastructure, the existing agencies who already have purview ... and have a digital update that expands their jurisdiction to just ask, ‘well, how do we protect the tech platforms in the same areas of jurisdiction?’”

Harris studied at the Stanford Persuasive Technology Lab alongside Instagram’s founders and said he therefore understands the culture of the people who build social media products and the ways the services are at times designed “intentionally for mass deception.” In their testimonies, he and other expert panelists highlighted how the largely unregulated social media platforms are being used to connect billions, while also inadvertently causing detrimental effects or being weaponized to mislead many of those same users. Harris, for example, shared stark research on the implications for teens who use these platforms. He explained that after nearly two decades of decline, “high depressive” symptoms among 13- to 18-year-old girls rose 170% between 2010 and 2017, an increase linked directly to their social media use.

Research Director Joan Donovan, who leads the Technology and Social Change Project at Harvard’s Shorenstein Center, detailed how white supremacists and foreign adversaries employ social media platforms to sow racial divisions in communities across the nation, as well as how fraudsters have used President Trump’s image, voice, name and logo to siphon millions of dollars from his supporters. Donovan also explained how deepfakes have come to present an “immediate identity threat” by realistically depicting public figures and others doing things they never did. She also highlighted the dangers posed by a similar but less-reported threat: “cheap fakes,” which are produced by clipping and altering authentic footage with conventional editing tools rather than artificial intelligence. Donovan noted that platforms recently refused to take down a doctored cheap fake of presidential candidate Joe Biden.

University of Nebraska’s Governance and Technology Center Director Justin Hurwitz also offered insight into another lesser-known threat: dark patterns. According to Hurwitz, “‘dark pattern’ is a new term for an old practice: using design to prompt desired—if not necessarily desirable—behavior.” One example might be when websites present terms of service or upgrade offers in windows that make it more difficult to cancel than to accept, he said. The professor repeatedly emphasized that dark patterns are something “the committee should be concerned about.” 
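
To make the asymmetric design Hurwitz described concrete, here is a minimal sketch of an upgrade dialog in which accepting takes one click while declining takes three. All of the flow names, labels and step counts below are invented for illustration, not drawn from the testimony or any real site.

```typescript
// Hypothetical sketch of an "asymmetric friction" dark pattern:
// one click to accept an upgrade, three escalating prompts to decline.
interface DialogStep {
  prompt: string;
  acceptLabel: string;    // prominent, one-click path
  declineLabel?: string;  // small, low-contrast, or absent entirely
}

// Accepting ends the flow immediately...
const acceptFlow: DialogStep[] = [
  { prompt: "Upgrade to Premium?", acceptLabel: "Yes, upgrade me!" },
];

// ...while declining re-asks the question at every step.
const declineFlow: DialogStep[] = [
  { prompt: "Upgrade to Premium?",
    acceptLabel: "Yes, upgrade me!", declineLabel: "maybe later" },
  { prompt: "Are you sure? You'll lose all Premium benefits.",
    acceptLabel: "Keep my benefits", declineLabel: "continue canceling" },
  { prompt: "Last chance: 50% off your first month!",
    acceptLabel: "Claim the discount", declineLabel: "no thanks" },
];

// The imbalance is measurable: the number of clicks each outcome costs.
console.log(`Clicks to accept:  ${acceptFlow.length}`);  // 1
console.log(`Clicks to decline: ${declineFlow.length}`); // 3
```

The point is less the code than the measurable imbalance: an auditor or regulator could simply count the steps each outcome requires.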

The hearing also marked Facebook’s first testimony on the Hill in the new decade. The tech giant’s Vice President of Global Policy Management Monika Bickert outlined Facebook’s new deepfake policy to ban videos that have been manipulated by artificial intelligence in ways that would not be apparent to an average viewer; some lawmakers and experts argued the policy doesn’t go far enough. Bickert also highlighted some of the company’s recent efforts to prevent the spread of deception on the platform. While the Facebook community encompasses more than 2 billion people, Bickert said the company took down more than 5 billion fake accounts in the first three quarters of 2019. She also said the company recently partnered with more than 50 entities around the world that are now fact-checking Facebook content across 40 languages every day.

But experts argued that much more needs to be done to address the online misinformation epidemic, and they said that responsibility lies with the government and the social platforms, not with internet users themselves.

“The tobacco industry doesn’t know which users are addicted to smoking. The alcohol industry doesn’t know exactly who is addicted to alcohol. But unlike that, each tech company does know exactly how many people are checking more than, you know, 100 times a day between certain ages, they know who is using it late at night,” Harris said. “And you can imagine using existing agencies—say the [Health and Human Services Department]—to be able to audit Facebook on a quarterly basis to say ‘hey tell us how many users are addicted between these ages and then what are you doing next quarter to make adjustments to reduce that number.’”
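
Harris’s HHS proposal amounts to a recurring reporting metric. A minimal sketch of what such a quarterly count might look like follows; the data shape, age brackets and 100-checks-a-day threshold are assumptions built from his spoken example, not an actual audit specification.

```typescript
// Hypothetical quarterly audit metric per Harris's example: count users
// who open the app more than 100 times a day, grouped by age bracket.
interface UsageRecord {
  userId: string;
  age: number;
  dailyOpens: number; // average app opens per day over the quarter
}

const HEAVY_USE_THRESHOLD = 100; // "more than ... 100 times a day"

// Count heavy users per age bracket; the brackets are illustrative.
function heavyUsersByAgeBracket(records: UsageRecord[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of records) {
    if (r.dailyOpens <= HEAVY_USE_THRESHOLD) continue;
    const bracket = r.age < 18 ? "13-17" : r.age < 25 ? "18-24" : "25+";
    counts.set(bracket, (counts.get(bracket) ?? 0) + 1);
  }
  return counts;
}

// An auditor could compare these counts quarter over quarter to see
// whether a platform's "adjustments" actually reduce the numbers.
const sample: UsageRecord[] = [
  { userId: "a", age: 15, dailyOpens: 140 },
  { userId: "b", age: 16, dailyOpens: 120 },
  { userId: "c", age: 22, dailyOpens: 105 },
  { userId: "d", age: 30, dailyOpens: 20 },
];
console.log(heavyUsersByAgeBracket(sample)); // Map { '13-17' => 2, '18-24' => 1 }
```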

On top of HHS, Harris said the Federal Communications Commission, Federal Trade Commission, Education Department and National Institutes of Health (“you can imagine every category of society,” he said) could also undergo digital updates through which they apply their existing jurisdiction in new ways to hold social platforms more accountable through audits and other actions.

“That’s the only way I can see doing this absent building, scaling a whole new digital agency which would be, you know, way too late for these issues,” Harris said. 

Donovan and Hurwitz also discussed how the FTC’s current authorities might be applied to tackle deception plaguing the virtual space. In terms of monitoring and preventing online scams and fraudulent behavior on social media, Donovan said lawmakers should better assess what data the FTC has access to and how the agency can make that information more actionable. Hurwitz noted that the trade agency has a long history of regulating unfair and deceptive practices as well as general advertising, and that it can use adjudication or enact rules to take action against online and offline entities alike. But he added that he’d like to see the agency do more, particularly through rulemaking and in-court enforcement actions, because the boundaries of its authority are “unknown, uncertain, untested.” Litigating on these relevant, present-day issues, he said, is one way the agency could demonstrate the power it already has.

“If we already have an agency that has power, let’s see what it’s capable of,” he said. 

Facebook’s policy lead did not offer the platform’s own stance on how agencies could support it in curbing online manipulation and deception. However, toward the end of the hearing, Rep. Lisa Blunt Rochester, D-Del., said she would be in touch regarding Facebook’s position.

“I think we are having a lot of conversations here about freedom of speech, and also the role of government,” she said. “And so as a follow-up, I would like to have a conversation with [Bickert] about what you see as that role of government versus self-regulation and how we can make something happen here.”