Google has evolved its search engine by combining text and image in its Lens service with the Multisearch function. Objective: to display more relevant results than with natural language queries.


Recently, Google integrated its Google Lens service directly into the Chrome browser on Windows and macOS (read our article on Google Lens in Chrome). Previously reserved for the Android and iOS apps, Google Lens lets you run a search from an image, whether taken on the spot or found on the Web. Powered by artificial intelligence, the service analyzes the content of the picture and then finds matches on the Internet. It can detect several elements in the same image and identify them (personalities, animals, objects, plants, buildings, etc.). Now the American giant is pushing web search a step further. Through Google Lens, it lets you refine image-search results by adding details in text form, without, of course, turning the search engine into an unwieldy contraption. This function, called Multisearch, was unveiled in September 2021. Google is now starting to roll it out in beta on Android and iOS, but only for American users for the moment.

How does Google Multisearch work?

To illustrate its new function, Google offers a concrete example. A user photographs a dress that they would like to find on the Web. A quick pass through Google Lens immediately returns results with dresses that are identical or close in style. To find out if the dress exists in other colors, all the user has to do is indicate in Google Lens that they would like the same model in green, for example, and the engine displays the results it finds online. Handy!

©Google

Google specifies that Multisearch can apply to other domains. You can, for example, ask it to find a pattern printed on a fabric on other items of clothing. Even better, by submitting a photo of a table, you can ask Google to find matching chairs. Or, by supplying the image of a plant, you can learn how to care for it. The field of application of this new function is therefore very broad. It rounds out Google's arsenal of tools, which are not always relevant with natural-language queries. By combining image and text, the engine could prove even more effective.

©Google

However, it will be a while yet before Google Multisearch is deployed in Europe. The American giant's Google I/O conference, which will take place on May 11 and 12, could be the occasion for its worldwide launch.
