Google Lens is evolving and has a new video search function, which allows the tool to analyze short sequences. This promises to push our interactions with our immediate environment even further!
Over time, Google Lens has become an indispensable tool. Launched in 2017, it initially let you search from an image to find similar photos on the Internet. But it now offers much more: Google’s algorithms analyze the content of the image and extract a wealth of information. It can even translate text in real time! First integrated into Google Assistant and Google Photos on Pixel smartphones, it expanded to Android and iOS, then to the Chrome browser, before finally landing in all web browsers, including on PC (see our practical sheet).
However, Google Lens has until now been limited to recognizing still images. The tool has since integrated generative AI capable of providing a complete response to uploaded images, as well as Google’s popular Circle to Search feature. But it is preparing to offer much more with video search, announced at the last Google I/O. Enough to radically transform the way we discover and interact with our environment!
You can now send a video to Google to ask questions about it!
If you open Google Lens on Android and hold down the shutter button, it’ll record a short video that you can ask a question about.
If you’re in a region where AI overviews are enabled, then you’ll get an AI-generated pic.twitter.com/qeGWy6u1TM
— Mishaal Rahman (@MishaalRahman) September 30, 2024
Google Lens video search: a feature currently being rolled out
One of our colleagues from Android Authority noticed the arrival of this new function. Its principle is extremely simple: launch Google Lens, hold down the shutter button to record a short video of the object or scene of interest, then ask questions out loud about the footage. Google then analyzes the video content and provides the most relevant search results. Note that, in regions where “AI Overviews” are available, users will even benefit from a response generated by the Gemini AI.
This new function will let us go beyond analyzing still images and interact more effectively with our environment, opening up new possibilities such as troubleshooting technical problems by simply filming them.
Although video search was announced last May, it is only beginning to roll out to a small number of English-speaking users in the United States. It is expected to gradually expand to other regions in the coming months.