A new Harvard study indicates that automated analysis of our language and the way we speak could make it possible to diagnose depression and psychosis, and even predict their onset before the first signs appear.
The way we express ourselves always says a lot about us. But speech analysis could reveal much more than a personality trait or an insecurity. According to a new study published in the Harvard Review of Psychiatry, psychiatric disorders such as severe depression and psychosis also show up in a person’s speech and language patterns. So much so that, according to the authors, it is possible to diagnose these disorders, measure their severity, and even predict their onset from the flow and phrasing of speech.
Depression, suicidal thoughts, mental illness… Automated speech analysis detects them
Drawing on previously published studies of speech analysis in relation to psychiatric disorders, the researchers identified four key areas of application: diagnostic classification, assessment of severity, prediction of onset, and prognosis and treatment outcomes.
According to the study, features of mental illness often present themselves through speech and language, and are typically found in the speed, coherence, and content of a patient’s speech. The study also notes that automated analysis is considerably more precise on this point than traditional clinical interviews, which are too subjective to yield reliable results.
Their review of the literature reaches several conclusions:
- Speech that is slow, full of pauses, negative in content, and lacking in energy can support a diagnosis of major depression. In these cases, diagnostic accuracy was high, exceeding 80% in one study;
- Erratic pitch, hesitation, and nervousness, on the other hand, distinguished patients with suicidal ideation from healthy controls 73% of the time;
- Automated analysis is also effective at predicting the onset of mental illness, especially in high-risk populations. Several studies of speech semantics, including coherence and complexity, predicted the onset of psychosis 2 to 2.5 years in advance with up to 100% accuracy.
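To make the idea concrete, here is a minimal sketch, not the researchers’ actual method, of how two of the speech markers mentioned above (speaking rate and pauses) could be computed from hypothetical word-level timestamps of the kind a speech-to-text engine might return. All names and values here are illustrative assumptions.

```python
# Toy illustration only: derive speaking rate and pause ratio from
# hypothetical (start, end) timestamps for each spoken word.

def speech_features(words):
    """words: list of (start_sec, end_sec) tuples, one per spoken word."""
    if not words:
        return {"speaking_rate": 0.0, "pause_ratio": 0.0}
    total = words[-1][1] - words[0][0]          # total elapsed time
    voiced = sum(end - start for start, end in words)  # time spent speaking
    pauses = total - voiced                     # time spent silent
    return {
        "speaking_rate": len(words) / total,    # words per second
        "pause_ratio": pauses / total,          # fraction of time silent
    }

# Example: five words spread over six seconds, with long gaps between them.
sample = [(0.0, 0.4), (1.5, 1.9), (3.0, 3.5), (4.6, 5.0), (5.8, 6.0)]
feats = speech_features(sample)
print(feats)
```

A real system would feed features like these, alongside semantic measures such as coherence, into a trained classifier; this sketch only shows where such numbers come from.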
An analysis that has its limits
While automated analysis of the rate, coherence, or content of speech can support a prognosis, the team cautions that this approach has limits. Many factors, such as the effects of medication and demographic and cultural attributes (language, region, gender), can cause variations in speech patterns and make it difficult to assess disease objectively.
Furthermore, the authors suggest that future research should consider disease states over time, as most of the studies reviewed involved currently ill patients; whether these speech patterns persist was not examined. Further research is therefore needed.
Blood alcohol can also be detected in the voice
According to separate Australian research, a new AI technology can instantly determine whether a person is over the legal alcohol limit by analyzing a 12-second snippet of their voice. The underlying algorithm was developed and tested on a dataset of 12,360 audio clips from intoxicated and sober speakers. In initial tests, it identified intoxicated speakers with a blood alcohol level of 0.5 g/L with nearly 70% accuracy, and those with a level above 1.2 g/L with more than 76% accuracy. Less expensive and simpler than established breath-based alcohol tests, the invention, its lead author suggests, could be integrated into mobile applications and used “as a preliminary tool to identify intoxicated persons”.