Alexa will be able to imitate voices, even those of deceased loved ones

Alexa could gain a new feature that is as spectacular as it is unsettling. Amazon’s voice assistant will reportedly be able to imitate human voices from short recordings, and even reproduce the voice of a deceased loved one.

At its re:Mars conference in Las Vegas, Amazon gave an extraordinary demonstration that is likely to generate plenty of coverage and controversy. The American giant presented the features under development for Alexa, its intelligent voice assistant, which is built into its Echo products and many other connected devices and is also available as a mobile app. Rohit Prasad, vice president and chief scientist of the Alexa division, used the occasion to unveil a new capability: voice imitation. By analyzing simple audio recordings (which need not be long; a single minute would suffice) and using artificial intelligence (AI), Alexa would be able to synthesize and reproduce any voice. The assistant could thus respond in the voice of a celebrity or a loved one. It is a real technological feat, and one sure to inspire a new kind of novelty use; many people would no doubt love to hear Alexa answer in the voice of their favorite star.

But during his demonstration, Rohit Prasad chose another example, as moving as it was unsettling: asking Alexa to reproduce the voice of a deceased grandmother reading a story, The Wizard of Oz, to her grandchild. In effect, the technology developed by Amazon makes it possible to analyze and imitate any voice, including that of someone who has died. “We are unquestionably living in the golden era of AI, where our dreams and science fiction are becoming a reality,” declared the head of Alexa, adding that this feature could help keep the memories of lost loved ones alive. Admittedly, the feat is admirable on a technical level; it even represents a kind of Holy Grail for some technophiles. But it is hard not to imagine how it could be misused, especially for impersonation and identity fraud. And the possibility of making the dead speak, putting words in their mouths that they never said, is unsettling, if not worrying. This function goes far beyond simply playing back a recording: it opens the way to fabrication and misuse of all kinds. And it could do real damage if combined with deepfake techniques, which already make it possible to manipulate images.

Amazon has not indicated if or when this spectacular, even revolutionary, feature will be available. But it is safe to bet that it will get people talking…
