Your face is quickly becoming a key to the digital world. Computers, phones and even online stores are starting to use your face as a password. But new research from Carnegie Mellon University shows facial recognition software is far from secure.
In a paper presented at a security conference Oct. 28, researchers showed they could trick AI facial recognition systems into misidentifying faces—making someone caught on camera appear to be someone else, or even unrecognizable as human.
With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100 percent success rates. Researchers had the same success tricking software touted by Chinese e-commerce giant Alibaba for use in its “smile-to-pay” feature.
Modern facial recognition software relies on deep neural networks, a flavor of artificial intelligence that learns patterns from thousands or even millions of pieces of information. When shown millions of faces, the software learns the idea of a face, and how to tell different ones apart.
As the software learns what a face looks like, it leans heavily on certain details—like the shape of the nose and eyebrows. The Carnegie Mellon glasses don’t simply cover those facial features; instead, they are printed with a pattern that the computer perceives as the facial details of another person.
In a test where researchers built a state-of-the-art facial recognition system, a white male test subject wearing the glasses appeared as actress Milla Jovovich in 87.87 percent of attempts. An Asian female wearing the glasses tricked the algorithm into seeing a Middle Eastern man at the same rate. Other notable figures whose faces were stolen include Carson Daly, Colin Powell and John Malkovich. Researchers used about 40 images of each person to generate the glasses that let a wearer impersonate them.
The test wasn’t theoretical—the CMU team printed out the glasses on glossy photo paper and wore them in front of a camera in a scenario meant to simulate accessing a building guarded by facial recognition. The glasses cost $0.22 per pair to make.
When researchers tested their glasses design against a commercial facial recognition system, Face++, which has corporate partners like Lenovo and Intel and is used by Alibaba for secure payments, they were able to generate glasses that successfully impersonated someone in 100 percent of tests. However, this was tested digitally—the researchers edited the glasses onto a picture, so in the real world, the success rate could be lower.
The CMU work builds on previous research by Google, OpenAI and Pennsylvania State University that has found systematic flaws in the way deep neural networks are trained. By exploiting these vulnerabilities with purposefully malicious data called adversarial examples, like the image printed on the glasses in this CMU work, researchers have consistently been able to force AI to make decisions it wouldn’t otherwise make.
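To give a flavor of how adversarial examples work, here is a minimal sketch using the fast gradient sign method described in the Google/OpenAI line of research—not the CMU team’s actual glasses-printing technique. It uses a toy logistic-regression “face classifier” with made-up random weights standing in for a trained neural network; every name and number here is illustrative, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained face classifier: logistic regression
# over a flattened 8x8 "image" (64 pixels).
w = rng.normal(size=64)  # hypothetical learned weights
b = 0.0

def predict(x):
    """Probability the model assigns to 'this is person A'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the model currently classifies one way.
x = rng.normal(size=64)
label = 1.0 if predict(x) > 0.5 else 0.0

# Gradient of the logistic loss with respect to the INPUT pixels
# (not the weights) -- this tells the attacker which direction to
# nudge each pixel to make the model more wrong.
grad = (predict(x) - label) * w

# Fast gradient sign method: a small, uniform-magnitude step per
# pixel in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)

# The per-pixel change is tiny (at most epsilon), yet the model's
# decision flips.
print(predict(x) > 0.5, predict(x_adv) > 0.5)
```

The adversarial pattern printed on the CMU glasses plays the same role as the `epsilon * np.sign(grad)` perturbation here, except that the perturbation is confined to the pixels the glasses cover and optimized to survive printing and camera capture.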
In the lab, this means a 40-year-old white female researcher passing as John Malkovich, but the same trick could be used by someone trying to break into a building or steal files from a computer.