The social media giant will adopt a warning-only approach toward deepfakes.
Fake videos and doctored photographs, often based on events such as the Moon landing and supposed UFO appearances, have been the subject of fascination for decades.
Such imagery is often deepfake content, so called because it is produced using deep learning techniques associated with neural networks and digital image processing.
The company proposed it would warn users about deepfake content by flagging tweets containing “synthetic or manipulated media”. Twitter says media may be removed in cases where it could lead to serious harm, but has stopped short of enforcing a strict removal stance. Users have until November 27 to provide feedback.
In adopting this warning-only approach towards deepfakes, the social media giant has shown poor judgement.
Why Deepfakes Are Dangerous
With advances in computer science, deepfakes are becoming an increasingly powerful tool to deceive people using social media.
Deepfake clips of celebrities and politicians are realistic enough to trick users into making financial, political and personal decisions based on the fake testimony of others.
Whether it’s a David Koch erectile dysfunction cream scam, an announcement by Donald Trump that AIDS has been eradicated, or a fake interview with Andrew Forrest leading to a finance scam, deepfakes present a serious risk to our ability to trust what we view online.
Social media companies have so far taken a sloppy approach to this threat. They have even promoted the use of photo algorithms letting users experiment with animated face masks, and provided tutorials on how to use editing programs.
Twitter’s latest draft policy on deepfakes sets a dangerous precedent. It allows social media platforms to handball away their responsibility to protect customers from manipulated videos and imagery.
Twitter Should Be Just as Accountable as Television
It’s time social media giants such as Twitter started seeing themselves as the 21st century version of free-to-air television. With TV, there are clear guidelines about what cannot be broadcast.
Since 1992, Australians have been protected by the Broadcasting Services Act, which ensures broadcasts provide “fair and accurate coverage”. The act protects viewers in regard to the origin and authenticity of television content.
The same principles should apply to social media. Americans now spend more time on social media than they do watching television, and Australia isn’t far behind.
By suggesting they only need to flag tweets with deepfake content, Twitter’s proposed policy downplays the seriousness of the threat.
Sending the Wrong Message
Twitter’s draft policy is dangerous on two fronts.
Firstly, it suggests the company is somehow doing its part in protecting its users. In reality, Twitter’s decision is akin to watching a child struggle to swim in heavy surf while nearby authorities, instead of actually helping, wave a sign saying: “some waves may be hard to judge”.
The second reason Twitter’s proposition is dangerous is because social media trolls and sock puppet armies enjoy surprising online audiences. Sock puppets specialise in deceiving users through false posts and fabricated online identities, posing as a single fake person (or multiple fake people).
Basically, content that has been signposted as deepfake will be exploited by people wanting to amplify its spread. It’s unrealistic to suppose this won’t happen.
If Twitter flags posts that are fake, yet leaves them up, the likely outcome will be a popularity surge in this content. Thanks to social media algorithms, this means a greater number of fake videos and images will be “promoted” rather than retracted.
Twitter has an opportunity to take a leadership role in preventing the spread of deepfake content, by identifying and removing deepfakes from its platform. All major social media platforms have the responsibility to present a unified approach to the prevention and removal of manipulated and fake imagery.
The circulation of a Nancy Pelosi deepfake video earlier this year revealed social media’s inconsistency in the handling of deceitful imagery. YouTube removed the clip from its platform, Facebook flagged it as false, and Twitter let it remain.
Twitter is in the business of helping users repost links and content as many times as possible. It creates profit by generating repeated referrals, commentary, and the acceptance of its content through promoted trends.
If deepfakes aren’t removed from Twitter, their growth will be exponential.
A Looming Threat
Early versions of such spurious content were relatively easy to spot. People in the first deepfake clips appeared unrealistic. Their eyes wouldn’t blink and their facial gestures wouldn’t sync with the words being spoken.
There are also examples of harmless image manipulation. These include web apps on Snapchat and Facebook that let users alter their photos (usually selfies) to add backgrounds, or resemble characters such as cute animals.
However, this new generation of altered imagery is often hard to distinguish from reality. And as criminals and pranksters improve their production of deepfakes, the other side of this double-edged sword could swing at any time.
David Cook is a lecturer of Computer and Security Science at Edith Cowan University