AI Is ‘No Magical Shortcut’ in Fighting Disinformation Online, FTC Says


The Federal Trade Commission sent a report to Congress detailing the limitations of artificial intelligence in combating disinformation and harmful online content.

The Federal Trade Commission issued a warning about the government’s use of artificial intelligence to fight disinformation, deepfakes, crime and other online concerns, citing the technology’s inherent limitations, including bias and discrimination.

In a report sent to Congress, officials at the FTC said that AI technology cannot play a neutral role in mitigating social problems online, specifically noting that using it in this capacity could lead to illegal extraction of users’ data and improper surveillance.

“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands.”

The report specifically highlights how rudimentary the technology remains, noting that the datasets AI algorithms are trained on are not representative enough to reliably identify harmful content.

AI software developers’ biases are also likely to influence the technology’s decision-making, a longstanding issue within the AI industry. The report’s authors added that most AI programs cannot gauge context, rendering them unreliable at distinguishing harmful content.

“The key conclusion of this report is thus that governments, platforms and others must exercise great caution in either mandating the use of, or over-relying on, these tools even for the important purpose of reducing harms,” the report reads. “Although outside of our scope, this conclusion implies that, if AI is not the answer and if the scale makes meaningful human oversight infeasible, we must look at other ways, regulatory or otherwise, to address the spread of these harms.”

The FTC also observed that human intervention is still needed to oversee AI tools that may inadvertently target and censor the wrong content. The report likewise strongly recommends transparency about how the technology is built, particularly its algorithmic development.

The report also noted that platforms and websites where harmful content circulates should work to slow the spread of illegal material and misinformation on their end. The FTC recommends implementing tools such as downvoting, labeling and other targeted measures that do not rely on AI-driven censorship.

“Dealing effectively with online harms requires substantial changes in business models and practices, along with cultural shifts in how people use or abuse online services,” the report concluded. “These changes involve significant time and effort across society and can include, among other things, technological innovation, transparent and accountable use of that technology, meaningful human oversight, global collaboration, digital literacy and appropriate regulation. AI is no magical shortcut.”

The report stems from a 2021 law that directed the FTC to review how AI might be used to fight disinformation and digital crime. FTC commissioners voted 4-1 to send the finalized report to Congress.