Just as the technology can be used to help deliberately spread falsities online, it can also be tapped to stop that spread.
Air Force personnel are set to soon be equipped with machine learning-driven capabilities to better counter COVID-19-connected disinformation.
Building on a prior effort to fight falsehoods, and as part of a larger solicitation launched in April, the branch recently awarded machine intelligence startup Primer a $1 million phase II Small Business Innovation Research, or SBIR, contract that will underpin the integration of new solutions.
The work is set to be formally announced in November, Nextgov confirmed late last week.
“In any crisis, it is hard to verify information quickly,” Primer’s CEO and Founder Sean Gourley explained. “In a global crisis like the pandemic, where certain countries are actively spreading disinformation, speed and scale are the biggest issues, which machine learning can help with.”
Gourley and several other officials from Primer briefed Nextgov via email on the technological solutions at the heart of the 12-month engineering effort and shed light on the swell of pandemic-related disinformation efforts the military is vigorously confronting.
Disinformation refers to falsities spread with the deliberate intent to deceive and mislead. Efforts to disseminate it globally only seem to be growing in sophistication, and amid the pandemic, America's adversaries are reportedly pushing a swell of disinformation narratives.
When the novel coronavirus physically disrupted the nation’s health care system and supply chain, it also introduced disruptions across the U.S. information landscape. Loads of baseless conspiracy theories, myths and hoaxes have circulated, and continue to circulate, across Facebook and other social media platforms, forcing hospitals, organizations and government agencies to at least try to limit their spread. Research revealed that more than 150 separate COVID-19-centered stories incorporating inaccurate or misleading information went viral on social media in the pandemic’s early weeks. Around that time, in late March, Secretary of State Mike Pompeo also detailed the government was tracking information campaigns steered by Iran, Russia, China and others, and not long after that, the Pentagon spotlighted its own moves to curb the confusion-provoking disinformation efforts.
“While this isn’t the first instance of an adversary waging disinformation campaigns during a crisis, the velocity, variety, and volume of COVID-19-related disinformation campaigns is unprecedented,” Primer’s National Security Group Vice President Brian Raymond told Nextgov.
Raymond, a former CIA analyst, noted that misleading reports about disease hot spots could inhibit the ability of Air Force intelligence analysts and operational planners to maintain accurate global situational awareness, while inaccurate claims about medical equipment shortages could trigger supply chain disruptions and ultimately impact logistics.
“The COVID-19 pandemic is not just ‘business as usual plus a global disease,’” he said. “It is qualitatively transforming the risk landscape, making accurate and timely information a matter of life and death.”
For commanders in the field who must make quick decisions, the speed and scale of incoming information can be overwhelming. As the company’s CEO previously noted, machine learning can offer much-needed support. Gourley said, “Since Russian interference in the 2016 U.S. election, the cost of creating and disseminating disinformation at scale has only gone down, and at least 50 countries have this capability.” And, on top of traditional software engineering solutions for running botnets and managing content farms, he said machine learning has also provided tools like deepfake image generation, synthetic voice imitation, and synthetic text generation.
“Twitter is already awash with inauthentic content, and we expect the problems to worsen,” Gourley said. “With these machine learning-powered capabilities so widespread and in active use by our adversaries, we must develop our own ML systems to defend ourselves—anything less is like bringing a knife to a gunfight.”
This latest effort is meant to build upon capabilities the business recently revealed and is developing in partnership with both the Air Force and U.S. Special Operations Command to combat disinformation more broadly.
“For this work, Primer will be integrating bot detection, synthetic text detection and unstructured textual claims analysis capabilities into our existing artificial intelligence platform currently in use with DOD,” the company's Director of Science John Bohannon explained. “This will create the first unified mission-ready platform to effectively counter COVID-19-related disinformation in near-real time.”
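Primer has not published implementation details for these stages. As a rough illustration only, the following Python sketch chains hypothetical versions of the three capabilities Bohannon names; every function name, heuristic and threshold here is invented for illustration, and a production system would use trained models rather than these crude rules.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    author: str
    text: str
    flags: list = field(default_factory=list)

# Hypothetical stage 1: flag accounts with bot-like naming patterns
# (real bot detection would draw on many behavioral signals).
def detect_bot(doc: Document) -> None:
    if any(ch.isdigit() for ch in doc.author[-4:]):
        doc.flags.append("possible-bot")

# Hypothetical stage 2: flag heavily repetitive text, a crude
# stand-in for a real synthetic-text detection model.
def detect_synthetic_text(doc: Document) -> None:
    words = doc.text.lower().split()
    if words and len(set(words)) / len(words) < 0.5:
        doc.flags.append("possible-synthetic")

# Hypothetical stage 3: pull out sentences that assert factual claims,
# so they can be checked downstream.
def extract_claims(doc: Document) -> list:
    return [s.strip() for s in doc.text.split(".")
            if " is " in s or " are " in s]

def run_pipeline(doc: Document) -> dict:
    """Run all three stages and return a combined analysis record."""
    detect_bot(doc)
    detect_synthetic_text(doc)
    return {"flags": doc.flags, "claims": extract_claims(doc)}
```

The point of the sketch is the architecture, not the heuristics: each detector annotates a shared document record, so the platform can present one unified analysis per post in near-real time.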
Officials are also producing a multi-label COVID-19 machine learning classifier algorithm, which Bohannon said “automatically classifies documents into one of 10 categories to enable the detection of the impact of COVID” on areas such as business, science and technology, employment, the global economy, elections—and more. The classifier has roots in a site Primer made freely available to the public early in the pandemic, which tracks and streamlines relevant coronavirus research, including where and how papers and authors appear in the news and on social media sites.
“With this site, Primer found that a paper published in January claiming similarities between COVID-19 and HIV was retracted less than three days later because of significant issues and discrepancies. Yet more than nine months later, this continues to be one of the most tweeted about papers, with much of the conversation conspiracy-based,” Bohannon said.
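Primer has not released the classifier itself. To convey the multi-label idea in miniature, here is a keyword-matching sketch in Python; the category names echo those reported above, but the keyword lists are invented, and a real system would use a trained model rather than keyword lookup.

```python
# Hypothetical category -> keyword lists, for illustration only.
CATEGORIES = {
    "business": ["revenue", "company", "market"],
    "science and technology": ["vaccine", "study", "research"],
    "employment": ["jobs", "layoffs", "unemployment"],
    "global economy": ["gdp", "trade", "recession"],
    "elections": ["ballot", "vote", "campaign"],
}

def classify(text: str) -> list:
    """Return every category whose keywords appear in the text.
    Multi-label: one document may receive several tags at once."""
    words = set(text.lower().split())
    return sorted(cat for cat, kws in CATEGORIES.items()
                  if words & set(kws))
```

For example, a sentence mentioning both new research and layoffs would be tagged with both “science and technology” and “employment”, which is what distinguishes a multi-label classifier from one that forces a single category per document.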
A wide range of publicly available textual content is being leveraged to train the company’s machine learning platform, and global news coverage makes up much of the core training data.
The final integrated solution is expected to be delivered in the second quarter of next year.
“The best-case scenario for this work is that the Air Force and the U.S. military more broadly, are able to more quickly and effectively identify foreign disinformation campaigns,” Raymond said.