The world has a fake news problem. And with the speed and scale of false information being spread across the internet, it can seem impossible to stop.
Artificial intelligence could be a way to slow its spread—and help stop companies profiting from intentional or accidental dissemination.
The method will be complex but the idea is simple. AI can be trained to identify fake news and gather data on sites that are the most prolific in peddling it—or just sloppy in their fact checking, experts say. Lists—somewhat like risk-management reports—can be made for companies that are looking to advertise on credible sites, as well as for those looking to invest in reliable publishers.
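As a rough illustration of the idea (not Factmata's actual system), per-article scores from a classifier can be aggregated by domain into a ranked risk list. Here a trivial keyword heuristic stands in for a trained machine-learning model, and all site names and headlines are invented:

```python
from collections import defaultdict

# Stand-in for a trained fake-news classifier: returns a risk score in [0, 1].
# A real system would use a machine-learning model trained on labeled articles;
# this keyword check is purely illustrative.
CLICKBAIT_MARKERS = ["you won't believe", "shocking", "doctors hate"]

def article_risk(headline: str) -> float:
    hits = sum(marker in headline.lower() for marker in CLICKBAIT_MARKERS)
    return min(1.0, hits / 2)

def site_risk_report(articles):
    """Aggregate per-article scores into a per-domain list, riskiest first."""
    scores = defaultdict(list)
    for domain, headline in articles:
        scores[domain].append(article_risk(headline))
    report = {domain: sum(s) / len(s) for domain, s in scores.items()}
    return sorted(report.items(), key=lambda kv: kv[1], reverse=True)

articles = [
    ("example-news.com", "Budget figures released for 2018"),
    ("clicks4cash.net", "You won't believe this shocking cure"),
    ("clicks4cash.net", "Doctors hate this one trick"),
]
print(site_risk_report(articles))
```

An advertiser or investor could then treat the ranked output much like a risk-management report, deciding for themselves where to draw the line.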
Dhruv Ghulati, CEO and founder of Factmata, received an award from the UK prime minister for his company’s role in using machine learning to tackle fake news. He pointed out to Quartz that sites that publish misleading or unvetted information are “often financially driven—fake news stories correlate highly from a linguistic perspective to clickbait stories.” According to researchers at Sofia University in Bulgaria and the Qatar Computing Research Institute, clickbait is up to 98% correlated with fake news.
With this in mind, these experts said there is at least one tangible option that can be adopted right now: “Defunding deliberate, proactive news articles which damage public health or conversation and even could be life threatening should be the case,” said Ghulati.
Mevan Babakar, head of automated fact-checking at the UK nonprofit Full Fact, told Quartz: “This is an effective route for certain kinds of false content. When advertising platforms were inadvertently creating an incentive for made-up content by allowing anyone to make money from adverts on content regardless of whether it was made up, it made sense to fix that unintended consequence.”
Harvard University’s Nieman Lab has noted the links between fake news and ad revenue before. Google and Facebook have also said they’re trying to stop fake news sites from siphoning ad money through their platforms by amending their AI and employing human fact checkers.
Ghulati insisted that the data wouldn’t be biased towards a platform or a brand—it would simply provide the information for investors or advertisers to make up their own minds. Babakar said that this could be a realistic way of limiting distribution of very low quality content, by acting as a “spam filter.”
It’s only one piece of the puzzle
Humans will still need to play a vital role in fact checking, in tandem with technology, to address the more complex claims of bias we see in political debate. “Above all, we all have to choose what to trust for ourselves. Charities, businesses, governments can all do things to make that easier. Free speech is key. Governments shouldn’t be controlling what people say,” said Babakar.
Fake news isn’t always spread with malicious intent, which is why fact-checking needs to be done at the source. “For the most part, we imagine it’s well-intentioned people inaccurately sharing things,” said Babakar. She added that the presentation of cherry-picked information is what leads to misleading articles and statements posted by hyper-partisan outlets.
This is where automated fact-checking can come in. Unlike the AI risk-management reports flagging sites for disseminating fake news, Babakar said the fact-checking technology Full Fact is developing is a way for news organizations and other outlets to check information more granularly and in real time. The aim is to stop false information from being published.
The approach is “very specific, very targeted, and requires a deep knowledge of the underlying sources and issues to develop,” she said. Reviews are tightly focused, looking, for example, at statistics cited by politicians about the National Health Service, immigration, or other specific topics.
The prototype is already being used in-house, and Full Fact hopes to offer its technology to news sites and other outlets in October this year. Fact-checking organizations across the world are also adopting automated tools, as highlighted by the Reuters Institute for the Study of Journalism at Oxford University. The institute noted that development of automated fact-checking has accelerated over the past couple of years, and that outlets such as the Associated Press and the Washington Post are looking into how to adopt these tools.
Here’s how automated fact-checking can work
When two opposing politicians cite statistics on poverty, with one saying poverty is up while the other says poverty is falling, both can be right when you look at absolute versus relative numbers. Automated fact-checking can examine repeated claims, document where they have been cited before, and check whether there is evidence to back up the statements.
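To see how both politicians can be right, consider some hypothetical figures: if the number of people in poverty grows but the population grows faster, the absolute count rises while the rate falls.

```python
# Hypothetical figures, for illustration only.
year1 = {"population": 60_000_000, "in_poverty": 9_000_000}
year2 = {"population": 66_000_000, "in_poverty": 9_240_000}

absolute_change = year2["in_poverty"] - year1["in_poverty"]
rate1 = year1["in_poverty"] / year1["population"]  # 15.0%
rate2 = year2["in_poverty"] / year2["population"]  # 14.0%

print(f"Poverty is up by {absolute_change:,} people")        # the absolute claim
print(f"Poverty rate fell from {rate1:.1%} to {rate2:.1%}")  # the relative claim
```

Here poverty is up by 240,000 people in absolute terms, yet the poverty rate fell from 15% to 14%. An automated checker's job is to surface both framings so readers can judge the claim in context.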
Another part of this process allows key words and statistics from speeches and statements to be checked against the facts.
“It’s not like just classifying whole web pages as either ‘true’ or ‘false’ but what it can do in specific cases is give people the information they need to make their own minds up when they need it,” said Babakar.
AI isn’t going to be the cure-all in identifying and limiting the impact of fake news. The technology itself has been partly to blame for the spread of false information, especially across social media platforms like Facebook. Still, the hope is that the very weapon that has been used to disseminate misleading information can be used to curb it. One element of that solution will depend on using AI to penalize publishers by hitting them where it hurts—their wallets.