Online influence operations are often easy and cheap to set up with open source tooling, according to new research from Cisco Talos.
State-backed disinformation campaigns are likely to increase because they are so easy to set up and supporting open-source tooling is so prevalent, according to new research from Cisco Talos.
In a report published Aug. 26, the company examined the infrastructure and resources typically leveraged by a number of state-backed influence operations.
Nick Biasini, a threat researcher at Cisco Talos and one of the lead authors of the report, told FCW that the low barrier to entry and number of free, open source tools available can make it relatively easy for even non-technical groups to carry out information operations at scale.
That includes automation scripts to post and disseminate content, digital libraries that facilitate interaction with social media platforms, virtual private networks to obscure and avoid detection and software that tracks engagement metrics with users. Commercial software like NationBuilder, which is marketed to political campaigns as a way to centralize communication streams with voters, can also be used to track and manage a target audience for information operations.
"What that really does is level the playing field," said Biasini. "Instead of having to have deep programming knowledge to write the kind of tools necessary to interact [with social media platforms], it's already done for you. With some limited modifications, you're off and running."
Foreign-backed disinformation campaigns have many features in common, the report said. Many are based around a provocateur, a central figure who sets strategic goals and organizes activities between independent and state-aligned groups to achieve a common goal.
The independent groups often masquerade or moonlight as legitimate marketing companies, such as the Tunisia-based UReputation, British Columbia's AggregateIQ or Israeli firm Archimedes Group. These companies sometimes show traces of discernible links to patron governments, but in general the idea is to maintain a degree of plausible separation. Other entities, like the Russian Internet Research Agency (IRA) or the group behind Secondary Infektion, tend to have more direct observable ties to their patron governments.
These agents often rely on state payments and bot resources funneled through a series of shell companies, like Concord Management and Consulting LLC, which was responsible for funding Russian disinformation activities by the IRA and others in 2016. Those resources can then be deployed on social media platforms, real and fake news websites, blogs and other media. From there the actors seed their content within targeted groups, use bots and allies to promote it and look for news hooks or cultural sparks that can set off a viral conflagration.
Matt Olney, director of threat intelligence and interdiction at Talos, said many state-level voting officials take a monitoring approach to the problem: keeping tabs on certain keywords, watching for bad information related to their state and preparing announcements that can get ahead of a piece of disinformation before it can successfully go viral.
While those activities can be part of a successful defense, researchers say state and local election administrators can take a number of more proactive steps: maintaining a sustained presence on social media to give voters an established, trusted source of information; pushing for account verification from social media platforms; and, for local governments, securing a .gov address, which makes it harder to spoof election websites and trick voters.
"One of the things you need to do to counter disinformation campaigns is to build the bridges to your voters before the [campaign] comes," Olney said. "So, provide concrete connections back to Secretary of State and county election offices."
Even if disinformation campaigns are easy to start, that doesn't mean they are automatically effective at pushing a certain message. Researchers still struggle to quantify the impact that disinformation can have on a target population. While Talos researchers concluded that such operations are likely to become more common, they also told FCW that greater public and media awareness, mass takedowns of millions of inauthentic accounts across social media platforms and new policies put in place around disinformation and inauthentic activity have given Americans far more tools of their own to combat the problem.
"If you look in comparison to 2016 and where we were then versus now, we are in great shape comparatively … we are far better suited today than we were four years ago to deal with this," said Olney.