Researchers at Cornell University in the United States have developed glasses that can recognize commands spoken silently to a smartphone, simply by analyzing the mouth movements of the wearer.
Developed by Cornell's SciFi Lab (Smart Computer Interfaces for Future Interactions), the device is designed to let you unlock and operate your smartphone in any circumstances: amid heavy background noise (in a stadium or a nightclub) or, conversely, in places where silence is required (such as a library).
There is no longer any need to speak a command aloud; simply mouth the words and the glasses act as a relay between you and the smartphone. You can silently pronounce your phone's unlock code, or commands such as "louder", "forward" or "stop" to control your favorite playlists.
As conceived, the device is relatively compact and, above all, consumes very little energy. The glasses, called EchoSpeech, work like a sonar: they emit inaudible sound waves across the face and pick up the returning echoes, which register the slightest movement of the mouth. From the shape of each echo, an artificial intelligence identifies the command being issued. Only a few minutes of training are enough for it to recognize around thirty commands and digits.
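To make the echo-sensing idea concrete, here is a minimal illustrative sketch in Python, not the researchers' actual code: it cross-correlates a near-ultrasonic chirp with a simulated microphone recording to build an "echo profile", then matches it against stored templates. All parameter values (sample rate, chirp band, durations) and the nearest-template classifier are assumptions for illustration; the real EchoSpeech system relies on a trained deep-learning model.

```python
import numpy as np

# Illustrative parameters only -- the real system's frequencies and
# sample rates are described in the Cornell paper.
SAMPLE_RATE = 48_000   # audio sample rate (Hz)
CHIRP_LEN = 600        # samples per emitted chirp (~12.5 ms)

def make_chirp(f0=18_000.0, f1=21_000.0, n=CHIRP_LEN, fs=SAMPLE_RATE):
    """Near-ultrasonic linear sweep, inaudible to most listeners."""
    freqs = np.linspace(f0, f1, n)
    phase = 2 * np.pi * np.cumsum(freqs) / fs  # integrate frequency
    return np.sin(phase)

def echo_profile(received, chirp):
    """Cross-correlate the microphone signal with the emitted chirp.
    Peaks in the result encode how facial surfaces reflected the sound;
    mouth movements shift and reshape these peaks."""
    prof = np.correlate(received, chirp, mode="valid")
    norm = np.linalg.norm(prof)
    return prof / norm if norm > 0 else prof

def classify(profile, templates):
    """Nearest-template matching by cosine similarity -- a stand-in
    for the deep model the researchers trained."""
    best_cmd, best_score = None, -np.inf
    for cmd, tpl in templates.items():
        score = float(np.dot(profile, tpl))
        if score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd

# Toy demo with synthetic "recordings" for two silent commands.
rng = np.random.default_rng(0)
chirp = make_chirp()
templates = {
    "louder": echo_profile(rng.normal(size=4 * CHIRP_LEN), chirp),
    "stop": echo_profile(rng.normal(size=4 * CHIRP_LEN), chirp),
}
query = templates["stop"] + 0.05 * rng.normal(size=templates["stop"].shape)
print(classify(query / np.linalg.norm(query), templates))  # -> "stop"
```

In a real pipeline, the echo profiles would be streamed continuously and fed to a classifier trained on a few minutes of the wearer's mouthed commands, which is what allows the system to adapt so quickly to a new user.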
Still at the prototype stage, the project could one day be commercialized and considerably simplify life for people with speech impairments. Better still, the technology could eventually be paired with a voice synthesizer, effectively giving a voice to people who cannot speak.