AI: researchers have (finally) found a real use for GPT-3


Advances in artificial intelligence (AI) fascinate as much as they worry. With the development of language models like GPT-3 and ChatGPT – a less powerful but more conversational version – the world may soon find itself inundated with fake news and entertainment created entirely by authors who are, in fact… machines.

Fortunately, algorithms also have their uses. Ask a powerful artificial neural network to summarize what quantum physics is all about, and it will do so clearly. Not because it understands anything about the behavior of particles, but because it can imitate humans, stitching together sentences learned from the text it ingested during training.

Likewise, natural language processing could, in the future, detect neurodegenerative diseases – today an international challenge – at an early stage. Research conducted at the School of Biomedical Engineering, Science and Health Systems at Drexel University in Philadelphia (USA) has just demonstrated that the GPT-3 model could identify, in spontaneous speech, cues that predict the first stages of Alzheimer's disease with roughly 80% accuracy.
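The article does not detail the pipeline, but screening approaches of this kind generally work by converting transcripts of spontaneous speech into numerical embeddings produced by a language model, then training an ordinary classifier on those embeddings to separate patients from controls. The snippet below is only a minimal sketch of that general idea, not the Drexel method: the file transcripts.csv, its "text" and "label" columns, and the open-source sentence-transformers model used here in place of GPT-3 embeddings are all assumptions for illustration.

```python
# Minimal sketch: screen speech transcripts with language-model embeddings.
# Hypothetical example; transcripts.csv and its "text"/"label" columns are
# assumed, and a local sentence-transformers model stands in for GPT-3.
import pandas as pd
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load transcripts of spontaneous speech, labeled 1 (patient) or 0 (control).
data = pd.read_csv("transcripts.csv")

# Turn each transcript into a fixed-length vector with a pretrained language model.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(data["text"].tolist())

# Train a simple classifier on the embeddings and check held-out accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, data["label"], test_size=0.2, random_state=42
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In a real screening study, a single train/test split like this would of course give way to cross-validation and clinical validation before any figure such as 80% could be claimed.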

Of course, these results will need to be confirmed by further research. For scientists, however, the enormous potential of AI in medical diagnosis is no longer in doubt. An article published in November in The Lancet Digital Health highlighted the impressive performance of an algorithm capable of detecting early signs of Alzheimer's from 12,132 images of the retina.

Avoiding the black box

This capacity for analysis now extends to our verbal exchanges. “Our AI model could be deployed as a voice application used in private practice to help doctors with screening,” the Drexel University researchers envision. Deploying this kind of assistance tool could take time, however, given the ethical and technical issues it raises.

“Ideally, there should be the widest possible database, including many different languages, to ensure that the models work fairly for all patients, regardless of their age, gender, ethnicity, or nationality,” the researchers point out. The protection of the voice data used during screening tests is also a major issue, since such recordings can be used to identify individuals.

Finally, confidence in AI must still grow before it can be applied to healthcare. Currently, developers cannot always tell what information an algorithm relied on to reach its conclusion. This is the famous “black box” problem. It is common in machine learning models, but such a lack of transparency is unacceptable when it comes to diagnosing dementia.
