My psychologist is an AI, is that serious, doctor?


    Is ChatGPT a good psychologist? An executive at the California-based artificial intelligence firm OpenAI, the company behind the viral chatbot, recently suggested as much, drawing a wave of criticism for downplaying how hard it is to treat mental illness.

    “I just had a pretty emotional, personal conversation with ChatGPT over voice, about stress and work-life balance,” declared Lilian Weng, who is in charge of safety issues related to artificial intelligence at OpenAI, at the end of September on X (formerly Twitter).

    “Interestingly, I felt listened to and comforted. I’ve never tried therapy before, but this is probably what it’s like?” she asked.

    Her message was, of course, a way to showcase the chatbot’s brand-new (paid) voice synthesis feature; the bot was released almost a year ago and is still searching for its business model.

    Psychology “aims to improve mental health and it’s hard work,” American developer and activist Cher Scarlett responded sharply.

    “Sending yourself positive vibes is good, but it has nothing to do with therapy,” she scolded.

    But can interacting with an AI really produce the positive experience described by Lilian Weng?

    According to a study published in the scientific journal Nature Machine Intelligence, this phenomenon could be explained by a placebo effect.

    To demonstrate this, researchers from the Massachusetts Institute of Technology (MIT) and the University of Arizona surveyed 300 participants, telling some that the chatbot was empathetic, others that it was manipulative, and a third group that its behavior was balanced.

    As a result, those who thought they were talking to a caring virtual assistant were much more inclined to consider the conversational agent to be trustworthy.

    “We see that, in some way, the AI is perceived according to the user’s preconceptions,” said Pat Pataranutaporn, co-author of the study.

    “Simulated empathy seems strange, meaningless”

    Many start-ups have rushed into developing applications that claim to offer some form of mental health support, without taking many precautions in what is nonetheless a sensitive area, and the first controversies have followed.

    Users of Replika, a popular app reputed to provide psychological benefits, have notably complained that its AI could become sex-obsessed or manipulative.

    The American NGO Koko, which ran an experiment in February offering 4,000 patients advice written with the GPT-3 AI model, also acknowledged that automated responses did not work as therapy.

    “Simulated empathy seems strange, meaningless,” wrote the company’s co-founder, Rob Morris, on X.

    This remark echoes the findings of the earlier study on the placebo effect, in which some participants had the impression of “talking to a wall”.

    “Stupidity” of robots

    Contacted by AFP, David Shaw of the University of Basel is not surprised by these poor results. “It seems none of the participants were informed about the stupidity of chatbots,” he remarks.

    The idea of a robot therapist is not new, however. In the 1960s, the first program of its kind, Eliza, was developed to simulate psychotherapy using the method of American psychologist Carl Rogers.

    Without really understanding anything about the problems it was told, the software simply kept the conversation going with stock questions, enriched with keywords found in its interlocutors’ replies, as sketched below.
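
    To illustrate that mechanism, here is a minimal, hypothetical sketch of an Eliza-style responder in Python; the keywords and canned questions are invented for the example and are not taken from the original 1960s program.

```python
import random

# Invented keyword -> follow-up question pairs, for illustration only.
TEMPLATES = {
    "mother": "Tell me more about your mother.",
    "work": "Why does work matter so much to you right now?",
    "stress": "What do you think is causing this stress?",
}

# Generic prompts used when no keyword matches, mimicking Eliza's stock replies.
DEFAULT_PROMPTS = [
    "Please go on.",
    "How does that make you feel?",
    "Can you tell me more about that?",
]

def reply(user_input: str) -> str:
    """Return a canned question keyed on the first keyword found, else a generic prompt.

    Like Eliza, this understands nothing about the problem it is told: it only
    matches words and reflects them back as open-ended questions.
    """
    lowered = user_input.lower()
    for keyword, question in TEMPLATES.items():
        if keyword in lowered:
            return question
    return random.choice(DEFAULT_PROMPTS)

if __name__ == "__main__":
    print(reply("I have been under a lot of stress at work lately."))
    # Prints the "work" question, since keywords are checked in dictionary order.
```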

    “What I didn’t realize was that extremely short exposure to a relatively simple computer program could induce powerful delusional thoughts in perfectly normal people,” Joseph Weizenbaum, the creator of this ancestor of ChatGPT, later wrote.
