You can tweet your selfies to the network’s Twitter bot for evaluation.
Scientists are using machine learning to empower self-driving cars, autonomous robots, and even smarter Google searches. Now a computer has been given an arguably grander task: figuring out how to take the perfect selfie.
Researcher Andrej Karpathy, a PhD student at Stanford working in the Computer Vision Lab, trained a neural network computer to determine what constitutes a “good” selfie, and now you can tweet your selfies to the network’s Twitter bot for evaluation.
Neural networks are computer systems inspired by the structure of the human brain. Rather like the DeepDream neural network that Google built to create bizarre, mutated dog-based art, Karpathy’s program analyses millions of images, breaking each one down into layers of shapes and colors. Karpathy wrote in an Oct. 25 blog post that he fed his own system 2 million selfies from around the web—though it’s not clear where exactly they came from—to build up its knowledge.
Neural networks learn to identify images through repetition—if you show the network hundreds of photos of dogs and cats, it’ll be able to give you a percentage-value guess on what’s in the next image you show it. If you show the network a cat but tell it that it was actually a dog, the network will recalibrate its layers to ensure that next time, it’s more likely to say dog. As Karpathy writes: “Then we just repeat this process tens/hundreds of millions of times, for millions of images.”
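The repeat-and-recalibrate loop described above can be sketched with a toy one-“neuron” classifier trained on synthetic data. This is not Karpathy’s actual network (his is a deep convolutional net trained on images); it is just a minimal illustration of the same idea: guess, compare to the true label, and nudge the weights so the right answer becomes more likely next time.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    # Squashes any number into (0, 1): the "percentage-value guess."
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic "dog vs. cat" data: dogs (label 1) tend to have larger
# values of both made-up features than cats (label 0).
data = [([random.gauss(2, 1), random.gauss(2, 1)], 1) for _ in range(200)] + \
       [([random.gauss(-2, 1), random.gauss(-2, 1)], 0) for _ in range(200)]

w = [0.0, 0.0]  # weights, the network's adjustable "layers" in miniature
b = 0.0
lr = 0.1        # learning rate: how big each recalibration step is

# "Repeat this process" many times over the whole dataset.
for _ in range(20):
    random.shuffle(data)
    for x, label in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)  # current guess
        err = label - p                              # how wrong was it?
        # Shift weights toward the correct label.
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

correct = sum((sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (label == 1)
              for x, label in data)
accuracy = correct / len(data)
```

After enough repetitions the weights settle so that the guess agrees with the label almost every time, which is the whole trick, scaled up by Karpathy to millions of images and millions of repetitions.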
Karpathy told Quartz that he wrote a program to pull 5 million images from the web that were tagged “#selfie.” He used another neural network to filter the images down to those containing at least one face, leaving about 2 million. Then the neural network learned to evaluate whether a selfie was “good” or not according to how many likes it had.
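A rough sketch of that curation step might look like the following. The `Selfie` record, the pre-computed face count, and the median like-count split are all assumptions for illustration; the article only says images were filtered to those with a face and labeled by like counts, so the exact threshold rule is not Karpathy’s actual code.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Selfie:
    url: str
    likes: int
    faces: int  # assumed output of some upstream face detector

def curate(scraped):
    """Keep images with at least one face, then label each "good" or
    "bad" depending on whether its likes exceed the median (a
    hypothetical stand-in for however popularity was thresholded)."""
    with_faces = [s for s in scraped if s.faces >= 1]
    cutoff = median(s.likes for s in with_faces)
    return [(s, "good" if s.likes > cutoff else "bad")
            for s in with_faces]
```

Labeling by likes turns an unlabeled pile of scraped photos into the kind of good/bad training set the network needs, without any human rating each image.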
The neural network pored over the photos over one night, looking at each photo “several tens of times,” Karpathy said. He then showed the program 50,000 photos that it hadn’t seen before to test out its new knowledge database. What emerged were some interesting insights on what the computer thinks humans will like in terms of selfies:
- We like women. The top 100 selfies the neural network chose were all women, most of whom had long hair. The program also tended to prefer selfies that cropped out foreheads, for some reason.
- Selfies should be mostly face. All of the best selfies seem to show the person’s face taking up about one third of the photo, with the head tilted slightly.
- Distort the photo. Karpathy noticed that the vast majority of popular selfies oversaturated the face, applied some sort of filter, and often added a border of some sort around the image.
Karpathy’s program also found some traits to avoid: Don’t take a photo in low light, get too close to the camera, or take a group shot. Selfies, it seems, should indeed focus on the self. Karpathy also built out a Twitter program that lets users upload a photo, and using the same neural network, find out how good their selfie game is. (It did not think very highly of me.)
Karpathy admits in his post that likes may not be the best metric for ascertaining selfie “quality,” but the neural network seems to suggest that the internet favors close-up photos of young, light-skinned women. Karpathy also ran the program on a batch of selfies solely from celebrities, and the top photo it pulled out was of model and actress Rosie Huntington-Whiteley, who fits that bill exactly.
Hopefully as machines get more intelligent, they’ll learn from our biases, instead of copying them.