Will Artificial Intelligence replace the psychologist?


"Hello, Sir. Please sit down. So… how have you been since the last time?"

What if, in a few years, this innocuous sentence were no longer spoken by a flesh-and-blood psychiatrist but by an AI, an artificial intelligence? With the recent return of psychiatry to the public debate, in particular because of the health crisis, the idea of mental health monitoring systems that integrate AI has resurfaced.

The idea is, let's be honest, far from new: the first trace of a chatbot (a dialogue program) dedicated to psychiatry, named Eliza, dates back to 1966. In recent decades, advances in artificial intelligence have enabled the rise of chatbot "robot therapists" and of systems that detect mental health conditions through the voice.

Today there are more than twenty robot therapists validated by scientific studies in psychiatry. Several of these studies suggest that patients can develop genuine therapeutic relationships with these technologies, and even that some feel more comfortable with a chatbot than with a human psychiatrist.

The ambitions are therefore great, especially since, unlike their human counterparts, these digital "professionals" promise objective, replicable, non-judgmental decisions, and round-the-clock availability.

It should be noted, however, that even though the name "robot therapist" evokes the image of a physical robot, most are text-based, at best accompanied by animated video. Beyond this absence of physical presence, which matters to the majority of patients, many fail to recognize all the difficulties experienced by the people they converse with. How, then, can they provide appropriate responses, such as a referral to a dedicated helpline?

Diagnosis and the psychiatrist's internal model

During the interview with a patient, the psychiatrist is able to perceive important signals betraying the existence of suicidal thoughts or domestic violence that current chatbots can miss.

Why does the psychiatrist still surpass his electronic counterpart? When this specialist announces "You have attention deficit disorder" or "Your daughter has anorexia nervosa", the process that led him to these diagnoses depends on his "internal model": a set of mental processes, explicit or implicit, that allow him to reach his diagnosis.

Just as engineering takes inspiration from nature to design efficient systems, it may be relevant to analyze what goes on in the head of a psychiatrist (the way he builds and uses his internal model) when he makes a diagnosis, in order to better train the AI meant to imitate him… But to what extent are a human's "internal model" and a program's similar?

This is what we asked ourselves in our article recently published in the journal Frontiers in Psychiatry.

Man-Machine Comparison

Drawing on previous studies of diagnostic reasoning in psychiatry, we established a comparison between the internal model of the psychiatrist and that of AIs. The formulation of a diagnosis goes through three main stages:

  • Information gathering and organization

During the interview with a patient, the psychiatrist gathers a large amount of information (from the medical file, the patient's behavior, what is said, etc.), which he then filters according to its relevance. This information can then be matched against pre-existing profiles with similar characteristics.

AI systems do the same: based on the data on which they were trained, they extract characteristics (features) from their exchange with the patient, which they select and organize according to their importance (feature selection). They can then group them into profiles and thus make a diagnosis, as in the sketch below.
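To make this concrete, here is a minimal sketch of such a pipeline in Python, assuming the scikit-learn library is available; the "interview excerpts", labels and parameters are invented for illustration and bear no relation to any real clinical system.

```python
# Illustrative pipeline: feature extraction, feature selection, classification.
# All data below is invented; this is a sketch, not a clinical tool.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

# Hypothetical, hand-written "interview excerpts" with diagnostic labels.
transcripts = [
    "I cannot focus, I lose my keys, I forget appointments",
    "I check the locks again and again before leaving",
    "I lose track of conversations and cannot sit still",
    "I wash my hands dozens of times a day",
]
labels = ["ADHD", "OCD", "ADHD", "OCD"]

pipeline = Pipeline([
    ("features", TfidfVectorizer()),         # extract features from the text
    ("selection", SelectKBest(chi2, k=10)),  # keep the most informative ones
    ("classify", LogisticRegression()),      # map feature profiles to a label
])
pipeline.fit(transcripts, labels)

print(pipeline.predict(["I keep rechecking the stove before going out"]))
```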

  • The construction of the model

During their medical studies, then throughout their career (clinical practice, reading case reports, etc.), psychiatrists formulate diagnoses of which they know the outcome. This ongoing training reinforces, in their model, the associations between the decisions they make and their consequences.

Here again, AI models are trained in the same way: whether during their initial training or their subsequent learning, they constantly reinforce, in their internal model, the relationships between the descriptors extracted from their databases and the diagnostic outcome. These databases can be very large, even containing more cases than a clinician will see in an entire career.
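As a rough illustration of what "reinforcing associations" means in practice, the toy training loop below (Python with NumPy, all numbers invented) strengthens, step by step, the weights linking synthetic descriptors to a synthetic diagnostic outcome.

```python
# Toy logistic-regression training: each update reinforces the weights that
# associate input descriptors with the observed outcome. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))       # 100 hypothetical cases, 5 descriptors
true_w = np.array([2.0, -1.0, 0.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(float)  # synthetic diagnostic outcome

w = np.zeros(5)                     # the model's "internal model"
for epoch in range(200):
    p = 1 / (1 + np.exp(-X @ w))    # predicted probability of the diagnosis
    w += 0.1 * X.T @ (y - p) / len(y)   # reinforce descriptor-outcome links

print(np.round(w, 2))               # weights drift toward the underlying pattern
```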

  • The use of the model

At the end of the two preceding stages, the psychiatrist's internal model is ready to be used with new patients. Various external factors can influence how he will do so, such as his salary or his workload, which find their equivalents in the cost of equipment and the time needed to train or use an AI.

As indicated above, it is often tempting to think that the psychiatrist is influenced in his professional practice by a whole set of subjective, fluctuating, uncertain factors: the quality of his training, his emotional state, the morning coffee, etc. And that an AI, being a "machine", would be free of all these human vagaries… This is a mistake! AI, too, involves an important share of subjectivity; it is simply less immediately perceptible.

Is AI really neutral and objective?

Indeed, every AI is designed by a human engineer. Thus, if one wants to compare the thought processes of the psychiatrist (and therefore the design and use of his internal model) with those of an AI, one must consider the influence of the coder who created it. The coder has an internal model of his own, in this case one that associates not clinical data with a diagnosis, but a type of AI with the problem to be automated. And there, too, many technical choices resting on the human factor come into play (which system, which classification algorithm, etc.).

If we want to compare the thought processes of the psychiatrist and those of the AI, we must consider the influence of the coder who created it

The internal model of this coder is necessarily influenced by the same factors as that of the psychiatrist: his experience, the quality of his training, his salary, the working time available to write his code, his morning coffee, etc. All of these will affect the design parameters of the AI and therefore, indirectly, its decision-making, that is to say, the diagnoses it will make.
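To see how much weight one such choice can carry, the sketch below (Python, assuming scikit-learn, with invented data) trains two equally defensible algorithms on the same cases: they can end up disagreeing about the same new case.

```python
# Two reasonable algorithm choices, same invented training data, and yet
# a different "diagnosis" for the same new case: the coder's choice matters.
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X = [[0], [1], [2], [3], [4], [5]]  # one synthetic descriptor per case
y = [0, 0, 1, 0, 1, 1]              # synthetic diagnostic labels
new_case = [[2.4]]

for model in (DecisionTreeClassifier(random_state=0),
              KNeighborsClassifier(n_neighbors=3)):
    model.fit(X, y)
    print(type(model).__name__, "->", model.predict(new_case))
```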

The other subjectivity that influences the internal model of AIs is associated with the databases on which they are trained. These databases are designed, collected and annotated by one or more other people, each with their own subjectivity, which comes into play in the choice of the types of data collected, the equipment involved, the measure chosen to annotate the database, etc.

While AIs are presented as objective, they actually reproduce the biases present in the databases on which they are trained.
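A toy demonstration of this point, with entirely synthetic data and assuming scikit-learn: if hypothetical annotators systematically over-diagnose one group, the trained model inherits that bias.

```python
# Synthetic illustration: biased annotations produce a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)  # synthetic group membership (0 or 1)
symptom = rng.normal(size=n)        # synthetic clinical descriptor

# An unbiased label would depend on the symptom alone; these annotations
# also depend on the group, encoding the annotators' assumed bias.
label = symptom + 0.8 * group > 0.5

model = LogisticRegression().fit(np.column_stack([group, symptom]), label)

# Identical symptom profiles are diagnosed at different rates per group.
for g in (0, 1):
    rate = model.predict(np.column_stack([np.full(n, g), symptom])).mean()
    print(f"predicted diagnosis rate when group = {g}: {rate:.0%}")
```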

The limits of AI in psychiatry

It emerges from these comparisons that AI is not exempt from subjective factors and, for this reason in particular, is not yet ready to replace a "real" psychiatrist. The latter also has relational and empathic qualities that allow him to adapt the use of his model to the reality he encounters, something AI still struggles to do.

The psychiatrist is thus capable of flexibility in gathering information during his clinical interview, which gives him access to information on very different timescales: he can, for example, question the patient about a symptom that occurred weeks earlier, or adjust the exchange in real time according to the answers obtained. AIs are currently limited to a pre-established, and therefore rigid, script.

Another strong limitation of AIs is their lack of corporeality, a very important factor in psychiatry. Indeed, any clinical situation rests on an encounter between two people, and this encounter involves speech and non-verbal communication: gestures, the position of bodies in space, the reading of emotions on the face, the recognition of implicit social signals… In other words, the physical presence of a psychiatrist constitutes an important part of the patient-caregiver relationship, which itself constitutes an important part of the care.

The comparison between the reasoning of the psychiatrist and that of the AI is nevertheless interesting from the perspective of cross-pedagogy

Any progress by AIs in this area depends on advances in robotics, whereas the psychiatrist's internal model is, for its part, already embodied.

Does this mean we should forget the idea of a virtual shrink? The comparison between the reasoning of the psychiatrist and that of the AI is nevertheless interesting from the perspective of cross-pedagogy. Indeed, a good understanding of the way psychiatrists reason will make it possible to better take into account the factors at play in the construction and use of AIs in clinical practice. This comparison also sheds light on the fact that the coder brings his own share of subjectivity to AI algorithms, which are therefore not able to keep the promises made on their behalf.

It is only through this kind of analysis that a true interdisciplinary practice, capable of hybridizing AI and medicine, will be able to develop in the future, for the benefit of as many people as possible.
