Google Gave the World Powerful AI Tools, and the World Made Porn With Them


In 2015, Google announced it would release TensorFlow, its internal tool for developing artificial intelligence algorithms, a move that would change how AI research and development are conducted around the world. The means to build technology that could have an impact as profound as electricity, to borrow phrasing from Google's CEO, would be open, accessible, and free to use. The barrier to entry was lowered from a Ph.D. to a laptop.

But that also meant TensorFlow's undeniable power was now out of Google's control. For a little over two years, academia and Silicon Valley were still the ones making the biggest splashes with the software, but now that equation is changing. The catalyst is deepfakes, an anonymous Reddit user who built AI software that automatically stitches any image of a face (nearly) seamlessly into a video. And you can probably imagine where this is going: As first reported by Motherboard, the software was being used to put anyone's face, whether a famous woman's or a Facebook friend's, on the bodies of porn actresses.

After the first Motherboard story, the user created their own subreddit, which amassed more than 91,000 subscribers. Another Reddit user, called deepfakeapp, released a tool named FakeApp that lets anyone download the AI software and use it themselves, given the correct hardware. As of today, Reddit has banned the community, saying it violated the website's policy on involuntary pornography.

According to FakeApp’s user guide, the software is built on top of TensorFlow. Google employees have pioneered similar work using TensorFlow with slightly different setups and subject matter, training algorithms to generate images from scratch. And there are plenty of potentially fun (if not inane) uses for deepfakes, like putting Nicolas Cage in a bunch of different movies. But let’s be real: 91,000 people were subscribed to deepfakes’ subreddit for the porn.
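Publicly available descriptions of tools in this vein point to a simple autoencoder design: one encoder learns a shared representation of faces, a separate decoder is trained per identity, and feeding one person's face through the other person's decoder produces the swap. Below is a minimal sketch of that idea in TensorFlow's Keras API; the layer sizes and the 64x64 input are illustrative assumptions, not FakeApp's actual configuration.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea often
# described for face-swap tools. Sizes are illustrative assumptions.
import tensorflow as tf

def build_encoder():
    # Compresses a 64x64 face crop into a shared 256-dimensional representation.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(64, 5, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(128, 5, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
    ])

def build_decoder():
    # Reconstructs a 64x64 face from the shared representation.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(256,)),
        tf.keras.layers.Dense(16 * 16 * 128, activation="relu"),
        tf.keras.layers.Reshape((16, 16, 128)),
        tf.keras.layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid"),
    ])

encoder = build_encoder()
decoder_a = build_decoder()  # trained on faces of person A
decoder_b = build_decoder()  # trained on faces of person B

# Both autoencoders share the encoder but keep their own decoder; after
# training, running person A's face through decoder_b yields the swap.
inputs = tf.keras.Input(shape=(64, 64, 3))
autoencoder_a = tf.keras.Model(inputs, decoder_a(encoder(inputs)))
autoencoder_b = tf.keras.Model(inputs, decoder_b(encoder(inputs)))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")
```

The point is how little there is to it: a few dozen lines of a freely available framework, plus enough photos of two faces.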

While much good has come from TensorFlow being open source, like potential cancer-detection algorithms, FakeApp represents the dark side of open source. Google, along with Microsoft, Amazon, and Facebook, has loosed immense technological power on the world with absolutely no recourse. Anyone can download AI software and use it for anything they have the data to create. That means everything from faking political speeches (with help from the cadre of available voice-imitating AI) to generating fake revenge porn. All digital media is a series of ones and zeroes, and artificial intelligence is proving itself proficient at artfully arranging them to depict things that never happened.

Since the software can run locally on a computer, the large tech companies relinquish control of what's done with it after it leaves their servers. The creed of open source, or at least how it's viewed in modern software development, also dictates that these companies are freed of guilt or liability for what others do with the software. In that way, it's like a gun or a cigarette.

And there's little incentive to change: Free software is good business for these companies, precisely because it allows more people to develop AI. Every big tech company is locked in a battle to gather as much AI talent as possible, and the more people flooding into the field, the better. Plus, others build projects with the code that inspire new products, people outside the company find and fix bugs, and students are taught with the software in undergrad and Ph.D. programs, creating a funnel of new talent who already know the company's internal tools.

“People talk about big breakthroughs in machine learning in the last five years, but really the big breakthrough are not the algorithms. It’s really the same algorithms as the 70s, 80s, and 90s. Really the breakthrough is open source,” says Mazin Gilbert, VP of advanced technology at AT&T and former machine learning researcher. “What open source did was reduce the barrier to entry, so that it’s no longer the IBMs and the Googles and the Facebooks, who have the deep pockets.”

Open source software also complicates the calls for ethics in AI development. The tools that Google offers today are not the keys to creating Skynet or some other superintelligent being, but they can still do real harm. Google and others like Microsoft, which also offers an open-source AI framework, have been vocal about the ethical development of artificial intelligence that would not cause harm, and their on-staff scientists have signed pledges and started research groups dedicated to the topic. But the companies don't offer any guidance or mandates for those who download their free software. The TensorFlow website shows how to get the software running, but offers no disclaimers on how to use it ethically and no instructions on how to make sure your dataset isn't biased.
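To be concrete about what even minimal guidance could look like, here is a hypothetical sketch of the most basic dataset check the documentation doesn't prescribe: tallying how training examples are distributed across labels and across a demographic attribute before any model is trained. The example records and field names are invented for illustration, not part of any TensorFlow guide.

```python
# Hypothetical pre-training sanity check: count how examples are distributed
# across labels and groups. Field names and data are illustrative only.
from collections import Counter

examples = [
    {"label": "approved", "group": "group_a"},
    {"label": "approved", "group": "group_a"},
    {"label": "denied",   "group": "group_b"},
    # ... the rest of a hypothetical training set
]

label_counts = Counter(e["label"] for e in examples)
group_counts = Counter((e["label"], e["group"]) for e in examples)

print("Label distribution:", dict(label_counts))
print("Label x group distribution:", dict(group_counts))
# A heavily skewed distribution here is an early warning that the trained
# model may behave very differently across groups.
```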

When I asked Microsoft’s VP of AI, Harry Shum, a few months ago how the company plans to guide those using its open-source software and paid developer tools towards creating ethical and unbiased machine learning systems, he said it’s not entirely clear.

“That’s really, really hard, I don’t think we have an easy solution today,” Shum said. “One thing we are gradually learning is that, as we design machine learning algorithms, we are trying to find the blind spots.”

Google did not respond to similar questions.

Moving AI away from open source isn't an ideal solution, either. By closing the software, we'd lose a rare view into how these otherwise opaque tech companies develop their artificial intelligence algorithms. Research is published for free on websites like arXiv, and raw code is shared on GitHub, meaning journalists, academics, and ethicists can find potential pitfalls and demand accountability. And the majority of people are using the AI toolkits for productive purposes, like standard image recognition in apps or sorting cucumbers.
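For contrast, a typical benign use looks something like this: a few lines of TensorFlow's Keras API loading a pretrained classifier and labeling a photo. The filename is an illustrative assumption.

```python
# Off-the-shelf image recognition with a pretrained network that ships with
# TensorFlow's Keras API. "cucumber.jpg" is an assumed example file.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Load an image, resize it to the network's expected 224x224 input, and classify it.
img = tf.keras.utils.load_img("cucumber.jpg", target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])

preds = model.predict(x)
for _, name, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{name}: {score:.2f}")
```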

It's not far-fetched to think that other kinds of fake videos will soon make their way to more mainstream platforms like Facebook and Twitter, finding a home amongst the manually produced political propaganda. And while AI researchers have been meeting to find a potential fix for this outside the purview of the large technology companies, it's unlikely one is coming soon. The software is already out there, after all.

Since developers of this core technology will continue to resist being held accountable for what people like the creator of deepfakes are doing, the burden will fall on the platforms where the videos and images are shared. Gfycat, for instance, deleted all the deepfakes GIFs it found hosted on its site. Reddit has banned the community. Pornhub also told Motherboard that it would delete the videos, since they constitute nonconsensual use of someone's likeness. But a website, deepfakes.club, still exists outside the purview of any major social platform.

No matter what happens to the original deepfakes software, this is only the beginning.