Google Is About to Have Even More Control Over the Images You Find on the Internet

Google announced Thursday that it’s bringing Google Lens, the Android feature that identifies objects in pictures, to Google Images on the mobile web.

This means you’ll be able to select parts of an image to refine your search. If you’re browsing photos of fall fashion on Google and like someone’s boots, you can draw a circle around the boots and Google will try to find images of similar ones. For certain categories, Google will automatically recognize products and objects and suggest places where you can buy the items online.

“Users wanted to be able to do more than just see the images, they wanted to be able to take action,” said Cathy Edwards, head of Google Images, in a conversation with Quartz. “A lot of times you don’t have the words to articulate what you’re actually looking for.”

The new functionality represents a shift in the way Google surfaces and displays images for its users. Rather than just matching keywords attached to an image, Google Lens introduces a new way of judging images against one another. It’s another layer between the creators of images and Google’s users. And because Google controls that layer of how images relate to one another, it has more opportunity than ever to personalize and manipulate which images you see on the internet.

In most cases, this could be an invaluable tool, one that potentially sets a new standard for how images are surfaced on the internet. It will be easier to run an initial Google search and then quickly narrow it down to what you really want, even when you can’t articulate it.

But Lens also consolidates the search company’s ability to arbitrate the world’s information. Google wants to bring more context to images by automatically recognizing objects like landmarks and offering more information when that portion of the image is tapped. It’s another way of layering information around images, a space Google has struggled to keep free of misinformation in the past. Last year, the company faced criticism for failing to keep fake news and conspiracy videos off YouTube and out of its main search results.

“We understand our responsibility in society,” Edwards said. “And we definitely consider in our ranking many different factors of authoritativeness to try to make sure we’re connecting users to high-quality, authoritative information.”

But a search engine’s effectiveness isn’t judged only on what people see; it’s also judged on what they aren’t shown. In the past, Google has turned off elements of its visual AI tools as political and ethical cover, silently changing the way it surfaces information to mask shortcomings in its code. After a 2015 gaffe in which Google Photos labeled black people as gorillas, Google turned off the app’s ability to categorize gorillas at all. (Wired reported in January 2018 that the feature was still disabled, and a Quartz test confirmed it hasn’t been turned back on since.) This iteration of Google Lens for image search, which is separate from Google Photos, will return results for gorilla searches, thanks to the additional context available around images.

Google is rolling out Lens for image search today, but for now it’s available only in the United States, in English.