The 70-page report assesses how neural networks and artificial intelligence could supercharge dis- and misinformation campaigns and sway the opinions of millions.
A report released Wednesday outlines how effective today’s artificial intelligence and neural networks could be if used to automate disinformation campaigns.
Conducted by Georgetown’s Center for Security and Emerging Technology, the report examines how OpenAI’s GPT-3—a powerful AI system that generates text from human prompts—could automate the generation of disinformation campaigns in the future.
Researchers looked into GPT-3’s capabilities after it authored a September op-ed in The Guardian, billed as the first article written entirely by AI.
“If GPT-3 can write seemingly credible news stories, perhaps it can write compelling fake news stories; if it can draft op-eds, perhaps it can draft misleading tweets,” the report states. “In light of this breakthrough, we consider a simple but important question: can automation generate content for disinformation campaigns?”
Researchers evaluated GPT-3’s performance on six tasks common to most disinformation campaigns, including the operation carried out by Russia’s Internet Research Agency in 2016. The tasks include narrative reiteration, elaboration, manipulation and persuasion, as well as autonomously developing new narratives and targeting members of new groups. In each case, researchers found that GPT-3 excelled, sometimes “with little human involvement,” and that human-machine teams were “able to devise and craft credible targeted messages in just minutes.”
Researchers found GPT-3 “easily mimics the writing style of QAnon and could likely do the same for other conspiracy theories” and could be extremely persuasive, too. When prompted to devise messages on two international issues—troop withdrawal from Afghanistan and sanctions on China—for a survey of real people, the AI changed minds.
“After seeing five short messages written by GPT-3 and selected by humans, the percentage of survey respondents opposed to sanctions on China doubled,” the report states.
The report’s authors note that nothing prevents foreign adversaries from employing these techniques today.
“Should adversaries choose to pursue automation in their disinformation campaigns, we believe that deploying an algorithm like the one in GPT-3 is well within the capacity of foreign governments, especially tech-savvy ones such as China and Russia,” the report states. “It will be harder, but almost certainly possible, for these governments to harness the required computational power to train and run such a system, should they desire to do so.”