Once the heady stuff of films like Minority Report, facial recognition algorithms are now a part of so many technologies that it’s hard to keep track of them. iPhones, iPhoto, and Facebook all try to detect bodies and faces with biometric software. Just last week, Facebook announced that its proprietary algorithm correctly identified faces 97.25 percent of the time—meaning it works almost as well as humans do.
But when these mathematical engines hunt for a face in a video, just what do they look for? How does software see a face?
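One classic answer is the sliding-window approach behind detectors like Viola-Jones: the software scans every patch of the image and scores simple brightness contrasts (so-called Haar-like features), such as the eye band of a face being darker than the cheeks below it. The toy sketch below illustrates that idea with a single hand-picked feature and a made-up threshold; real systems, including Facebook's neural-network-based DeepFace, learn thousands of features rather than one.

```python
def mean_brightness(image, top, left, height, width):
    """Average pixel value in a rectangular region of a 2D grayscale image."""
    total = 0
    for r in range(top, top + height):
        for c in range(left, left + width):
            total += image[r][c]
    return total / (height * width)

def haar_eye_feature(window):
    """Score one Haar-like contrast: a dark eye band above a brighter cheek band."""
    h, w = len(window), len(window[0])
    eyes = mean_brightness(window, h // 4, 0, h // 4, w)    # rows where eyes sit
    cheeks = mean_brightness(window, h // 2, 0, h // 4, w)  # rows just below
    return cheeks - eyes  # positive when the eye band is darker than the cheeks

def detect_faces(image, window=8, threshold=40):
    """Slide a window over the image; report positions scoring above threshold."""
    hits = []
    rows, cols = len(image), len(image[0])
    for top in range(rows - window + 1):
        for left in range(cols - window + 1):
            patch = [row[left:left + window] for row in image[top:top + window]]
            if haar_eye_feature(patch) > threshold:
                hits.append((top, left))
    return hits

# A synthetic 8x8 "face": a dark eye band (rows 2-3) on a bright background.
face = [[200] * 8 for _ in range(8)]
for r in (2, 3):
    face[r] = [40] * 8
print(detect_faces(face))  # → [(0, 0)]: the window over the "face" fires
```

The feature, the window size, and the threshold here are illustrative assumptions, not anyone's production values; the point is only that the detector never sees a "face," just rectangles of brightness arranged the way faces tend to be.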
The artist and writer Zach Blas has tried to show just what programs see with his project Face Cages. He’s constructed, literally, face cages—metallic sculptures that fit, painfully, onto a user’s face—that represent the shapes and polygons that algorithms use to hunt for faces.
And the London-based artist James Bridle has made a “surveillance spaulder”: a shoulder pad that alerts wearers when they’re under the gaze of a surveillance camera.
All of these are small projects of disruption or reminder—tools to frustrate the software, or to alert people to its existence. Part of Blas’s goal is perhaps even more political: He wants to remind viewers that faces vary wildly, and that algorithms can’t capture their full diversity.
“Asian women’s hands fail to be legible to fingerprint devices; eyes with cataracts hinder iris scans,” he argues. To Blas, beyond all their other possible faults, facial-detection algorithms exacerbate the very differences people are already punished for. The perfectible accounting of the algorithm—97.25 percent accuracy!—when forced onto a fleshy face recalls, for Blas, “handcuffs, prison bars, and torture devices used during slavery.”