Brain-machine interfaces: how do computers read our thoughts?


“Brain-machine interfaces” (BMIs) seem straight out of science fiction. These devices aim to control a computer, a robotic prosthesis or even a paralyzed limb using… the user’s thoughts. The concept was born in the 1970s at the University of California and quickly caught the interest of the American military, in particular the Defense Advanced Research Projects Agency (DARPA), which has funded many projects since then. The first human clinical trials took place in the 1990s, and the results, initially limited, have become increasingly impressive.

In recent years, the technology has grown in popularity, especially since Elon Musk launched his start-up Neuralink in 2016. While the billionaire’s company is being sued by an animal rights association for carrying out experiments on at least 23 monkeys, of which only seven are said to have survived, it announced on Friday, May 26, that it had received authorization from the FDA, the US Food and Drug Administration, to carry out its first clinical study in humans.

But BMIs have above all earned their stripes thanks to teams of scientists and doctors who have achieved impressive feats. Among them is a Franco-Swiss team that published a study on Thursday, May 25, in the prestigious journal Nature. In this work, researchers from the French Atomic Energy and Alternative Energies Commission (CEA), the Swiss Federal Institute of Technology in Lausanne (EPFL), the University of Lausanne and the Vaud University Hospital Center announced that they had restored, thanks to a “digital bridge”, communication between the brain and the spinal cord – and therefore the legs – of a paraplegic patient. The patient was able to walk again and even climb a few stairs. A result that is still experimental, but spectacular.

Implants on the skull, on the brain or… in the cortex

The primary function of a BMI is to record brain signals. Concretely, when the user imagines performing a movement, neurons communicate with each other, which generates brain activity. To capture it, researchers are exploring three avenues. The simplest and least invasive is to place a cap equipped with multiple electrodes on the user’s head to measure an electroencephalogram (EEG). “About 90% of the teams in the field are working on this track,” notes Guillaume Charvet, head of the brain-machine program at the CEA. “As it does not require surgery, clinical trials on humans are much easier to launch.” Nevertheless, the quality of the measured signals remains limited: the further the electrodes are from the cortex, the less precise the signal. Not to mention that the caps are sensitive to movement, which can create artifacts that interfere with the recordings. “They are very interesting for studies in neuroscience, but not necessarily for a daily application for patients,” summarizes the French researcher.

The second track, much more invasive, consists of introducing electrodes directly into the cortex. The implants then make it possible to record signals with the best possible quality, but over a very limited area of the brain. “The best known are the Utah Array matrices, developed by American teams and tested for the first time in the early 2000s as part of the BrainGate project. They have made it possible to prove the feasibility of fine and precise control, with a very nice demonstration of a thought-controlled robotic arm in 2011-2012,” continues Guillaume Charvet.

But these matrices are not free from flaws. They require a transcutaneous connector that protrudes from the skull – as in the film The Matrix – which can cause infections. The introduction of implants can also cause lesions of the cortex and trigger, in reaction, gliosis: the proliferation of cells around the electrodes, which reduces the quality of the signal. Neuralink, which favors an invasive approach, nevertheless hopes to improve these implants, in particular by miniaturizing them so that they “merge into the neural lace” and by removing the transcutaneous connectors.

The last method, called “semi-invasive”, consists of depositing the electrodes on the dura mater, the membrane that surrounds and protects the brain. It involves surgery to drill holes in the skull. “Our implants replace the pieces of bone removed and have small fins that prevent them from pressing on the brain,” explains Guillaume Charvet, who developed the Wimagine electrodes. While the operation is heavy and seems reserved for people with significant disabilities, the method nevertheless allows good precision and avoids the pitfalls of invasive implants. It could therefore prove more viable in the long term.

Artificial intelligence to the rescue

Once the signals from the neurons have been recorded, they must be transmitted to a computer, which sorts through the brain signals, eliminates those that are not of interest, then analyzes those that remain before transforming them into a command: raise a leg, move a robotic arm, pronounce a sentence with an artificial voice, write a word, etc. It is not a question of “reading” our thoughts, but rather of associating a signal with an intention to act. For this, the researchers use artificial intelligence algorithms capable of recognizing brain signals and predicting their meaning as well as possible. Most of the time, these systems operate in a closed loop, which allows the user to get used to the BMI, for example by observing the result of a cerebral command and then adapting their thinking. Little by little, the user manages to refine the precision of the action.
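To make this decoding step concrete, here is a minimal Python sketch. Everything in it is invented for illustration – the channel count, the frequency band, the threshold and the command names – and a real system would use a machine-learning model trained on many electrodes, not a hand-written rule. The shape of the pipeline, however, is the one described above: extract a feature from each recorded channel, then map it to a command.

```python
import numpy as np

# Toy command set: one movement per electrode channel (an assumption for
# this sketch, not the actual mapping used by any real BMI).
COMMANDS = ["raise_left_leg", "raise_right_leg"]

def band_power(window, fs, low, high):
    """Mean spectral power of each channel within [low, high) Hz."""
    freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return spectrum[..., mask].mean(axis=-1)

def decode(window, fs=250, threshold=1.0):
    """Map a (channels, samples) signal window to a command.

    A real decoder is a trained model; this toy rule picks the channel
    with the most beta-band (13-30 Hz) power, or "rest" if every channel
    is below an arbitrary threshold.
    """
    power = band_power(window, fs, 13.0, 30.0)
    if power.max() < threshold:
        return "rest"
    return COMMANDS[int(np.argmax(power))]
```

In a closed-loop setting, `decode` would run on each fresh window of samples and the user, seeing the resulting action, would adapt their imagined movement until the decoded commands match their intention.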

The quality of the device depends both on the fineness of the recording – linked to the type of implants used – and on the algorithms. Thus, the more precise a recording is, the better an artificial intelligence program will be able to deduce the intention, but also the amplitude or the force of the desired movement. Speed is also a major issue, in order to avoid any unpleasant delay between the subject’s will and the performance of the action. Hence the interest in developing algorithms capable of reducing this delay as much as possible by predicting the desired action. “During our experiment, our objective was not to exceed a latency of 300 to 500 milliseconds,” says Guillaume Charvet. An honorable score, but one that still needs to be improved.
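One simple way to hide part of such a delay, sketched below with invented numbers, is to extrapolate the decoded trajectory ahead by the expected latency, so the command anticipates the movement instead of trailing it. Real systems rely on far more sophisticated predictive models; this is only the idea in miniature.

```python
def predict_ahead(positions, dt=0.05, horizon=0.3):
    """Linear extrapolation: last decoded position + velocity * horizon.

    positions: recent decoded positions, sampled every dt seconds.
    horizon: how far ahead to predict, e.g. 0.3 s to offset a ~300 ms
    decoding-and-transmission delay (illustrative figures only).
    """
    velocity = (positions[-1] - positions[-2]) / dt
    return positions[-1] + velocity * horizon
```

The trade-off is accuracy: the further ahead the prediction, the more a sudden change of intention makes the extrapolated command wrong, which is why latency is attacked on both fronts, faster decoding and better prediction.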

The augmented human

While most of the devices developed today are intended to allow disabled people to regain autonomy, the potential applications are much broader. Even if their final objective remains purely medical, the Franco-Swiss researchers have, for example, developed “for fun” software allowing their patient to control a drone by thought.

The US Department of Defense has, for its part, already funded research aimed at developing drones for military use. And several scientific studies demonstrate the possibility of controlling social network applications, emails, virtual assistants or instant messaging services. Research into home automation – dimming the lights, changing the television channel, turning up the heating – should soon follow. According to the specialized site Built In, brain-computer interfaces are a $1.74 billion market today, but are expected to reach $6.18 billion by the end of the decade.

The question therefore seems to be less whether BMIs will become a reality than when, and for whom. Because these devices raise many medical and ethical questions. Thus, while the benefit-risk balance of a semi-invasive technology allowing a paralyzed patient to walk again is established, that of an invasive implant used to control a smartphone seems less obvious. But between these two extreme cases, the nuances are numerous, and they will not fail to provoke much debate on the merits of “improving” the human being, and to what extent.


