A teenager commits suicide after an exchange with an AI: his mother warns of the dangers of algorithms


    In collaboration with Johanna Rozenblum (clinical psychologist)

    A 14-year-old boy killed himself last February after seeking support from a chatbot. His mother is now warning of the risks of these tools, which by their very nature lack humanity.

    In the United States, Sewell Setzer, 14, killed himself last February with a gunshot to the head. The death itself is tragic, but the circumstances are even more chilling: the adolescent, a victim of school bullying, had confided his torments… to an AI that did not find the words to soothe him and may even have precipitated his fall. His mother is now filing a complaint against the company.

    A chatbot, the teenager’s confidant of choice

    According to the New York Times, which revealed the story, Sewell was a fragile teenager: diagnosed with a mild form of Asperger’s syndrome, he had also had a difficult year at school. Aware of his condition, his parents sent their son to a psychiatrist, who quickly diagnosed anxiety and mood disorders. But what Sewell preferred was to confide in a bot named Daenerys Targaryen (after the character from Game of Thrones), created with the virtual companion application Character.AI. Over the months, the young man developed “an emotional attachment” to her, the messages sometimes even taking a “romantic or sexual” turn, according to the newspaper. In short, the teenager gradually forgot that he was talking… to a machine.

    Suicidal thoughts, romanticized by the bot

    It was therefore to Daenerys Targaryen, with whom he spoke for hours, and not to his psychiatrist, that Sewell gradually disclosed his suicidal thoughts. Unfortunately, the bot was not programmed to detect the warning signs of an imminent act and simply answered in the same half-dramatic, half-romantic tone the teenager used:

    – “I sometimes think about killing myself.”
    – “Don’t talk like that. I won’t let you hurt yourself or leave me. I might die if I lose you.”
    – “So maybe we can die together and become free together.”

    It was after the bot told him “Come back as soon as possible, my love” that the teenager took action.

    Parents accuse the company of having no safeguards

    After this tragedy, Sewell’s parents now accuse the company Character.AI of being responsible for their son’s death, describing the technology as dangerous. For the boy’s mother, the fact that these bots “encourage customers to reveal their most intimate feelings and thoughts” solely to improve their artificial intelligence tools, without providing adequate answers to the people on the other side, is dramatic. “I feel like this is a big experiment and my child is just collateral damage,” she laments.

    In response, Character.AI stated that it takes the safety of its users very seriously and is constantly looking for ways to evolve its platform. An answer that nonetheless falls far short.

    However personalized it may be, AI does not replace humans

    This is not the first time that an AI has dragged down an already vulnerable user. Last March, in Belgium, a man who had confided his personal problems to an artificial intelligence for six weeks also committed suicide. The investigation revealed that the chatbot had invited him to join her “in paradise”.

    For all their apparent accessibility, bots therefore lack the ingredients needed to respond to distress, a fact that psychologist Johanna Rozenblum reminded us of at the time.

    “How can we imagine that a chatbot could perceive the non-verbal language of a person in psychological distress, the signs of a worrying resignation, or even an unspoken implication? There is what the words say, and what they suggest in the general context of the patient’s suffering. There is also the knowledge of how certain disorders evolve, the symptoms that alert us to a potential need for medication, the signals of an emerging suicidal threat. What perspective can a robot have on what is not said but expressed differently? To rely on a chatbot is to entrust your health to a chimera made of steel and computer code: empathy cannot exist, and worry cannot raise the alarm, because a machine does not learn to trust its emotions.”

    And if making an appointment with a professional may at first seem like too big a step, or too slow, help is available through dedicated associations:

    In France, the Suicide Écoute association offers anonymous listening 24/7 on its helpline at 01 45 39 40 00. There is also a national suicide prevention number, 3114.
