Facebook, Twitter Detail Efforts to Stop Fake Accounts from Targeting Veterans



Lawmakers pressed social media companies on whether they’re devoting enough of their resources to stopping romance scams and disinformation. 

Facebook and Twitter representatives described their strategic efforts to combat exploitation specifically targeting veterans online—yet the increasingly sophisticated digital disinformation threats show no signs of abating, lawmakers learned at a hearing on Capitol Hill Wednesday.

“These operations are rapidly evolving,” said the science director of the social network analysis company Graphika, Vlad Barash. “Early campaigns we observed and analyzed targeted individuals online at random, using easily discoverable methods; newer methods target specific communities, embed sock-puppet personas in them, and use sophisticated ‘cyborg’ approaches that synergize large-scale automated operations with precisely crafted disinformation injection and hijacking efforts by human operators.”

As an influential community in America’s social fabric both on and off the internet, veterans have become a popular target for online manipulation. Bots, trolls, bad actors and beyond are seizing opportunities and vulnerabilities to exploit them over the internet, with the intent of sowing division and spreading disinformation. The hearing is part of a broader investigation launched by the House Veterans Affairs Committee in March to address the online veteran-targeting efforts being weaponized to inspire fear and spark national confusion. 

“The goal of these operations is not simply to ‘go viral,’ or to have a ‘high Nielsen Score,’ so to speak, but rather to influence the beliefs and narratives of influential members of key communities active at the wellsprings of social and political ideas,” Barash said.

Also on the panel was Kristofer Goldsmith, a veteran who served in Iraq and has been tracking trolls and foreign adversaries targeting the veteran service organization Vietnam Veterans of America. VVA recently gave him the title of chief investigator out of necessity, he said, upon realizing they were “facing a series of foreign-born online imposters who were creating social media accounts and websites that were meant to trick our members and supporters.” 

In September, Goldsmith published the results of a two-year investigation that documents “persistent, pervasive, and coordinated online targeting of American service members, veterans, and their families by foreign entities who seek to disrupt American democracy.” In producing the report, Goldsmith said he has acted as a sort of “unpaid consultant” for Facebook and Twitter, companies that he’s had a “great relationship” with since releasing the probe. The veteran believes that the challenges America is facing around disinformation will require “a whole of society response,” and though he noted it’s right to assign blame and guilt, he said people should come out of the hearing considering the companies as “American assets and victims.” 

“Basically what it comes down to is we are asking for them to be the police force and they don't have any sort of enforcement mechanism,” Goldsmith said. “If they can’t do anything that brings the pain to the human being sitting behind the anonymous avatar, there’s no real incentive for that person, for that human being, that bad actor, to stop what they are doing.”

Both Twitter’s Public Policy Manager Kevin Kane and Facebook’s Head of Security Policy Nathaniel Gleicher broadly highlighted some of their companies’ efforts to support veterans and combat disinformation campaigns and other attempts to undermine the social networking services they provide. They each noted that their companies are hiring veterans, leveraging advanced technological and reporting tools to enable more rapid responses to misinformation, producing transparency reports regarding their findings, and working with law enforcement when necessary. 

Kane noted that Twitter conducted a comprehensive review of potential service manipulation during the last election, and in 2018 the company reported that more than 50,000 malicious, automated, Russian-linked accounts had been tweeting election-related content. He noted that their tweets made up only about 1% of all election-related tweets during the period. Kane added that the company also provides a publicly accessible archive on foreign, state-backed online influence operations, which contains more than 30 million tweets from accounts engaged in disinformation campaigns based in Russia, Iran and China, among other countries.

“On the issue of platform manipulation, we have made significant progress in our work. In fact, since January of 2018, we have challenged more than 520 million accounts engaging in platform manipulation,” Kane said. “To be clear, we define platform manipulation as disrupting the public conversation by engaging in bulk, aggressive, or deceptive activity.”

The platform has also released a number of updates to its policies around scam tactics and synthetic media, and recently announced plans to stop all political advertising on Twitter globally. At Facebook, Gleicher—who previously prosecuted cybercrime at the Justice Department—said insiders enforce the company's policies through a mix of human review and automated detection technologies. He emphasized that 35,000 employees across the country are working on safety and security, more than three times the number dedicated to this work in 2017. The company’s security budget, he said, is greater than Facebook's entire revenue when it went public. Gleicher added that Facebook took down over 2 billion fake accounts in the first quarter of 2019 alone. 

“We know that we face motivated adversaries in this space and that we have to continually improve our approach to stay ahead,” Gleicher said.  

Still, both representatives deflected some of the critical questions they were asked rather than offering specific details, inspiring visible frustration from a few of the lawmakers. At the top of the hearing, Chairman Mark Takano, D-Calif., repeatedly asked why the companies take so much longer to remove manipulated or spoofed content than they do to remove copyrighted content. After a bit of back and forth with each, Takano concluded, “I still haven’t heard a direct answer to my question, my time is up.” 

Rep. Conor Lamb, D-Pa., noted that Facebook made more than $17 billion in revenue in the last quarter, but he also did not get a detailed answer from the Facebook rep regarding how much money and resources the company spends on tools and employees to solve the disinformation issues. 

“Very, very large amounts, Congressman,” Gleicher said. Lamb asked again for a specific amount and though Gleicher said he could only offer more details later, he added “the key question for us isn’t do we have enough resources. The question is how can we most effectively deploy what we can get to make sure that we tackle this problem.”

“OK, I’m glad that’s the question for you. My question was whether you do have enough resources, so we’ll see if we can find that out,” Lamb said. 

Rep. Lauren Underwood, D-Ill., also pressed the company representatives on content moderation and how long it takes the platforms to respond to those who report that they are victims of online impersonation. Kane said he did not have a specific timeframe and Gleicher said it could take a matter of days at Facebook, but it also depends on who reported the issue. Both said the companies do not include specific time frames around their responses in the transparency reports on enforcement that they create. 

“Well I think that that might be something that might be worthwhile to consider for both companies going forward, given the scale of this problem in our country and the way that it has really spread through multiple lines of victims,” Underwood said. 

At the end of the panel, Takano also noted that though Congress did not hear from law enforcement officials, they are also an integral part of the solution. The FBI declined to offer up someone to testify and also declined to provide Nextgov with a comment on why. Going forward, Takano said the committee is scheduling a closed FBI briefing for members and staff to learn how that bureau and other agencies are engaging with social media platforms. Eventually, the committee will decide whether legislative action is needed to address policy gaps.

The chairman added that while he does not doubt the social media platforms' sincerity or commitment to the issue, “more can and must be done to protect veterans’ voices.” Expert panelists also expressed their agreement.

“I do want to recognize Twitter and Facebook’s efforts at taking them down and I think those efforts are paying off—but so far, we are still in the crest of the wave,” Barash said.