Actors linked to adversarial nations — namely China and Russia — worked across platforms to push inaccurate content, according to a report released Tuesday.
Social media conglomerate Meta identified thousands of accounts and profiles across multiple sites that were deemed part of “the largest known cross-platform covert influence operation in the world,” in which actors from adversarial nations targeted online users in the U.S. and allied countries.
Coordinated inauthentic behavior documented in Meta’s newly released second-quarter Adversarial Threat Report is attributed to actors in Turkey, Iran, China and Russia. These campaigns targeted platforms including Facebook, Instagram, X, LinkedIn, Pinterest and YouTube in a bid to spread content to a growing audience, often by posing as news outlets.
Of the nations found to have engaged in deceptive posting, actors tied to China and Russia collectively reached audiences on more than 50 apps across multiple countries. Within these campaigns, Chinese actors promoted commentary about the Chinese government and its Xinjiang region, along with critiques of the U.S. government — some focused specifically on American journalists and researchers.
Russian actors engaged in similar behavior but focused their content on mimicking European news outlets with the intent of undermining international support for Ukraine.
American news outlets that fell prey to Russian actors’ spoofing attempts included Fox News and The Washington Post, according to the report.
“Rarely, if ever, do today’s online threats target one single technology platform — instead, they follow people across the internet,” a Meta press release stated.
Given the sprawling digital reach of China’s operation, Meta’s report offered a more in-depth analysis of coordinated inauthentic behavior from Chinese-linked digital actors. The report said the network, dubbed “Spamouflage,” operated primarily on Facebook and Instagram, supported by apparent fake-engagement farms operating out of Vietnam, Bangladesh and Brazil.
Activity in this network included manufactured comments from fake profiles affiliated with the Spamouflage campaign and repeated posting of articles pushing its agenda across a bevy of online platforms, lending the campaign “a considerable degree of resilience.”
Despite the campaign’s reach and apparent effort, Meta researchers note it was largely unsuccessful in reaching authentic online users.
“Despite the very large number of accounts and platforms it used, Spamouflage consistently struggled to reach beyond its own (fake) echo chamber,” the report says. “Many comments on Spamouflage posts that we have observed came from other Spamouflage accounts trying to make it look like they were more popular than they were.”
But Sen. Mark Warner, D-Va., expressed concern about the impact such influence campaigns could have leading up to an election.
“While I’m glad to see Meta continue its public research on adversarial activity, I have concerns that across the industry we’re seeing disinvestment and deprioritization of platform integrity work,” Warner told Nextgov/FCW. “Ahead of an upcoming presidential election, it is essential that Meta, YouTube, Twitter, Reddit and other platforms increase their efforts, particularly as new and more sophisticated forms of abuse are opened up by generative AI models.”
Meta’s report comes as U.S. intelligence and security agencies continue to warn of foreign hacking of U.S. digital networks and to focus domestic research and innovation on countering Chinese advances amid escalating geopolitical tensions.