Google Lens Launched Within Google Images Search: Real-Time Content Navigation Inside Photos

On October 25, 2018, Google announced a new Lens integration within Google Images Search. As part of a series of updates to the Google Images user experience, this visual tool can identify components of an image that users may want to learn more about.

Google first introduced Lens functionality in Google Assistant and Google Photos in May 2017. This month’s large-scale rollout for Search uses the same machine learning to analyze an image. Tapping on “dots” selects objects Google Lens has pre-identified, or users can draw their own border around any part of the image. Either action prompts Google Images to surface related images, web pages, and even videos.

Beginning October 2018, Lens dots in Google Images will mostly appear on products, and Lens can also recognize text on object surfaces. In the coming months, the dots will increasingly appear among a wider variety of image subjects. Brands should prepare for this additional exposure by evaluating landing page content to optimize click-through rates.

Rather than passively scanning a business card for a QR code, potential clients may dynamically interact with the world around them. Trade show displays and branded items could function directly as referral sources for related content. By the same token, patient audiences may eventually be able to scan a medical device or photo to learn more about certain conditions or resources.

To date, this Google Images feature is live for U.S. mobile web users searching in English, with additional countries and languages to follow in the coming months. While there is not yet a Lens-specific schema for ranking better in Google Lens results, the expanding application of AI-assisted search makes it more important than ever to follow best practices for image SEO.
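As a starting point, the standard image SEO fundamentals still apply: descriptive file names and alt text, explicit dimensions, and structured data on the host page. A minimal sketch (the file name, alt text, and product details below are hypothetical):

```html
<!-- A descriptive file name and alt text help Google understand the image -->
<img src="/images/red-leather-tote-bag.jpg"
     alt="Red leather tote bag with brass buckles"
     width="800" height="600">

<!-- Product structured data (schema.org) gives context to the page the image lives on -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Red Leather Tote Bag",
  "image": "https://example.com/images/red-leather-tote-bag.jpg",
  "description": "Handmade red leather tote bag with brass buckles."
}
</script>
```

None of this is specific to Lens, but well-described, well-structured images are exactly what visual search features have to work with.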

By: Valerie Lentz
Topics: Google Images, Google Search, Machine Learning, SEO