Giving speech back to mute people: the feat of American researchers

Two cables are screwed into Pat’s head and connected to a computer. A scientist films the session: it is a world first. For the past few days, this 68-year-old American woman, fitted with sensors implanted in her brain, has been able to speak again. Until now she had been mute because of Charcot’s disease (amyotrophic lateral sclerosis), which has been paralyzing her little by little since her diagnosis in 2012. To express herself through the machine, the former human-resources professional must think very hard about a sentence, as if she were still able to say it. Where her words once came out only as incomprehensible sounds, they now spring forth, spoken by a synthetic voice.

Broadcast this Wednesday, August 23, this sequence, drawn from a series of experiments carried out at Stanford University in the spring of 2022, marks a turning point in the development of brain-machine interfaces. After demonstrations of people browsing the Internet or piloting exoskeletons by thought, a team of scientists has now succeeded in partially restoring speech to people who had lost it, thanks to these electrodes that make it possible to control computer programs directly with the mind. A feat.

By implanting four small metal rods in their patient’s cerebral cortex, Francis Willett and Jaimie Henderson, the two lead authors of a study published this Wednesday in Nature, allowed her to pronounce, through a computer, up to 60 words per minute. A record. Until now, a similar operation, carried out a few years ago on another volunteer, had only allowed him to “say” 15 words per minute, at the cost of almost insurmountable effort. That pace is far too slow to genuinely hope to improve patients’ living conditions, since an able-bodied person pronounces at least 160 words in the same span of time.

A record immediately surpassed

With such efficiency, the Stanford researchers demonstrate that the neural implants imagined by many start-ups, including Elon Musk’s Neuralink, are a realistic prospect. They could eventually be marketed and allow patients to communicate again. Especially since, barely unveiled, these results have already been surpassed by another research team, this time from the University of California, San Francisco. These researchers report having achieved a tempo of 73 words per minute, using similar techniques, on a patient in her forties who had lost her speech eighteen years earlier following a stroke. They too were granted a publication in Nature.

To achieve this speed of diction, this other team, led by Sean L. Metzger and Edward Chang, used electrodes that are somewhat less precise but are positioned on the surface of the brain. “This technique may produce more word errors, but it could yield longer-lasting results. The deeper the electrodes are implanted, the sooner one risks losing the signal: brain tissue wraps around the devices and the signal fades within a few months,” explains Blaise Yvert, who is working on similar developments at the Grenoble Institute of Neurosciences.

Contrary to what is sometimes conveyed in the press, the sensors used in these various experiments do not make it possible to read thoughts directly, insists the specialist, annoyed by these erroneous representations, sometimes brandished as bogeymen. Only the motor intention, the physical command to move the face in order to speak, is decoded. “There is nothing magical about it. The researchers give the patient a sentence to say, then record the corresponding electrical brain activity. This signal varies according to the words chosen: we do not move the lips, the tongue and the jaw in the same way. An algorithm is then trained to recognize these signatures,” details Blaise Yvert.
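To make the training loop Yvert describes more concrete, here is a minimal, purely illustrative sketch in Python: synthetic feature vectors stand in for the brain activity recorded while a patient attempts a few words, and an off-the-shelf classifier learns to recognize each word’s “signature”. The word list, channel count and choice of model are assumptions made for the example; the actual studies rely on far richer recordings and more sophisticated decoders (including language models), none of which is reproduced here.

```python
# Illustrative only: synthetic "neural" features per attempted word, then a
# standard classifier trained to recognize each word's signature. This does not
# reflect the studies' actual data or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
words = ["hello", "water", "thanks"]   # hypothetical attempted words
n_channels = 64                        # stand-in for electrode channels
trials_per_word = 50

# Each word gets its own average activity pattern across channels (its
# "signature"); individual trials are noisy copies of that pattern.
signatures = rng.normal(size=(len(words), n_channels))
X = np.vstack([
    signatures[i] + 0.5 * rng.normal(size=(trials_per_word, n_channels))
    for i in range(len(words))
])
y = np.repeat(np.arange(len(words)), trials_per_word)

# Train on part of the trials, check recognition accuracy on the rest.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("decoded word:", words[clf.predict(X_test[:1])[0]])
```

In this toy setup the classifier simply learns which spatial pattern of activity goes with which attempted word, which is the principle Yvert summarizes; the clinical systems additionally decode continuous speech and correct errors with language models.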

So far, these teams have worked with only two patients. Can the feat be repeated with others? “It is difficult to know in advance how our technologies would perform on other people, because some may lose the brain capacity related to language by not using it,” warned Jaimie Henderson at a press conference. The researcher grew up alongside a father who had himself been deprived of speech. “I dreamed of having the power to give him back his voice. Things have come full circle,” he said while presenting his work to an audience of international journalists.

These discoveries mark an acceleration in the field of brain-machine interfaces. “These technologies are in their infancy, but we believe we could already transfer our algorithms to other people without having to train them for as long as our first patients,” said Frank Willett. On August 15, researchers reported in PLOS Biology that they had been able to reconstruct a Pink Floyd song from the electrical brain signals of people with epilepsy, in whom implants had been placed to determine the origin of their illness. Along the way, these patients had agreed to let the researchers run all kinds of tests on them, including listening to the legendary band. On May 1, another team succeeded in roughly deciphering the language-related signals of people undergoing MRI scans, confirming that less invasive technologies could eventually be used as well.