How Augmented Reality Will Improve Your Google Searches

Google used its spring keynote to unveil new hardware such as the Pixel 6a and the Pixel Watch, but also improvements to its search engine. The American giant wants to use augmented reality so that users no longer have to type in keywords.

Even though Google has little real competition in Internet search, the American giant continues to improve its algorithms as well as its features. The goal, quite clearly, is to help us find answers without necessarily typing words into a search field. In a few years, text search may even disappear, since speaking or showing a photo will be enough for Google to help us.

Visual search already exists with Google Lens: pointing your smartphone at a flower or a piece of clothing is enough for the artificial intelligence to identify it. On Tuesday, during its spring keynote, Google revealed that this tool is used 8 billion times a month!

Augmented reality search

The idea now is to combine the best of each type of search (text, voice, image, etc.), and it goes by the name "Multisearch". Google gives the following example: using your smartphone, you point Google Lens at a dish, a dress or a flower, and Google recognizes it. Then, by typing "near me" in the search field, Google shows you where you can eat that dish or buy that dress or those flowers. For the moment it is available in English, but it will soon be rolled out in other languages, including French.

The other novelty is what Google calls "scene exploration". This feature could be described as a wide-angle version of Google Lens. In the example Google showed, the user scans a store display, as if taking a panoramic photo, and information about each element of the scene appears directly on the screen. This is augmented reality applied to search.
