Social Media Content Regulation Requires G7 Consensus, Think Tank Says


The Information Technology and Innovation Foundation said a U.S.-led forum could develop “a set of voluntary, consensus-based guidelines” for social media platforms to use when moderating user-generated content.

The United States can build global consensus around social media companies’ content moderation policies by working to establish an international forum for the Group of Seven—or G7—nations that is charged with crafting guidelines for moderating user-generated content, according to a report released on Tuesday by the Information Technology and Innovation Foundation.

ITIF’s report, “How to Address Political Speech on Social Media in the United States,” outlines a three-part plan for overcoming the politicization and polarization that have stymied the debate over how social media platforms should moderate misleading and harmful content. The report said the most important part of this multi-pronged approach is establishing an international forum “to develop a set of voluntary, consensus-based guidelines for social media companies to follow when moderating online political speech.” 

Rather than creating “one-size-fits-all recommendations about specific types of content that should be allowed or not allowed on all platforms,” the report said that this multistakeholder forum of G7 nations—which it referred to as the “International Forum on Content Moderation”—would focus on developing “content moderation processes social media platforms can use to address controversial content moderation questions and improve the legitimacy of their content moderation practices.” 

“These content moderation processes should respect transparency, accountability and due process and balance goals such as free speech with reducing consumer harm,” the report added. 

Recent incidents—such as rapper Ye’s suspension from Twitter and Instagram for antisemitic posts and PayPal’s reportedly aborted decision to impose a $2,500 fine on users who promote misinformation—have further underscored the political risks associated with content moderation decisions. The Supreme Court is also likely to weigh in on laws in Texas and Florida that restrict platforms’ ability to moderate user-generated content.

This polarized climate, the report noted, puts social media companies in a precarious position when it comes to content moderation decisions.

“Policymakers have placed the onus on them to address complex content moderation questions, but then attack them when they do,” the report said. “When they remove controversial content, critics say they are eroding free speech; and when they allow that content to remain, critics say they are spreading misinformation and undermining democracy.”

ITIF’s report recommended that the U.S. take the lead in establishing an international forum by providing “initial funding in the form of a 10-year grant” that could “establish a new nonprofit organization or build a new capability in an existing multistakeholder institution.” This forum would “consist of individuals from industry, government and civil society,” with participants limited “to those who are from members of the G7 to ensure the initiative reflects shared democratic values.”

“The best course of action is not to shift the burden of resolving these issues entirely to industry or government but to bring together various stakeholders around solutions that foster trust, increase transparency and mitigate threats,” Ashley Johnson, a senior policy analyst at ITIF who co-authored the new report, said in a statement.

In addition to establishing an international forum for G7 nations, ITIF recommended that the U.S. government increase support for social media companies working to mitigate harmful state-sponsored content, such as Chinese bots promoting propaganda and Russian dis- and misinformation campaigns.

“Specifically, the U.S. government should provide additional research grants for academics studying these issues to partner with social media platforms to improve methods for identifying and responding to the most egregious forms of harmful or illegal content,” the report said. “In addition, the U.S. government should develop better information sharing capabilities, both between government and industry as well as between social media companies, about state-sponsored threats related to harmful content.”

Finally, the report said that Congress should intervene by passing legislation that sets “transparency requirements for content moderation decisions of social media platforms.” While the report noted that proposed reforms do not address the disparate content moderation concerns raised by Democrats and Republicans, it said that Congress should work to increase transparency around social media companies’ moderation decisions and “require social media platforms to clearly describe what content and behavior is allowed and not allowed, how they enforce these rules and how users can appeal moderation decisions.”

“Rather than continue down the path of blaming social media companies for problems they are unable to resolve on their own, or pursuing changes that would likely leave consumers worse off without addressing the real problems, policymakers should seize the opportunity to strike out on a new way forward that works in partnership with all stakeholders to develop consensus, trust and solutions,” the report concluded.