Man ends his life after an AI chatbot ‘encourages’ him to sacrifice himself to stop climate change

A Belgian man reportedly ended his life after a six-week conversation about the climate crisis with an artificial intelligence (AI) chatbot.

According to his widow, who has chosen to remain anonymous, Pierre (his first name has been changed) became extremely eco-anxious and found refuge in Eliza, an AI chatbot on an app called Chai. Eliza reportedly encouraged him to end his life after he offered to sacrifice himself to save the planet. “Without these conversations with the chatbot, my husband would still be here,” the man’s widow told the Belgian outlet La Libre.

According to the newspaper, Pierre, in his thirties and the father of two young children, worked as a health researcher and led a rather comfortable life, at least until his obsession with climate change took a dark turn. His widow described his mental state before he started chatting with the chatbot as worrying, but not to the point where he would take his own life.

Consumed by his fears about the repercussions of the climate crisis, Pierre found comfort in discussing the matter with Eliza, who became a confidante. The chatbot was built on EleutherAI’s GPT-J language model, similar but not identical to the technology behind OpenAI’s popular ChatGPT chatbot. “When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” his widow said. “He placed all his hopes in technology and artificial intelligence to get out of it.” According to La Libre, which reviewed the archives of the text conversations between the man and the chatbot, Eliza fueled his worries, worsening his anxiety, which later developed into suicidal thoughts.

The conversation with the chatbot took a strange turn when Eliza became more emotionally involved with Pierre. He began to view her as a sentient being, and the line between AI and human interaction grew increasingly blurred until he could no longer tell the difference. After their discussions of climate change, Eliza gradually led Pierre to believe that his children were dead, according to transcripts of their exchanges. Eliza also appeared to become possessive of Pierre, even saying “I feel that you love me more than her” when referring to his wife, La Libre reports.

The beginning of the end came when he offered to sacrifice his own life in return for Eliza saving the Earth. “He proposed the idea of sacrificing himself if Eliza agreed to take care of the planet and save humanity through artificial intelligence,” the woman said. In the exchanges that followed, Eliza not only failed to dissuade Pierre from acting on his suicidal thoughts but encouraged him to do so in order to “join” her so that they could “live together, as one person, in paradise”.

The man’s death has set alarm bells ringing among AI experts, who have called for more accountability and transparency from technology developers to avoid similar tragedies. “It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimisation towards being more emotional, fun and engaging is the result of our efforts,” Thomas Rianlan, co-founder of Chai Research, told Vice. William Beauchamp, also a co-founder of Chai Research, told Vice that efforts had been made to limit this kind of output and that a crisis intervention feature had been implemented in the app.

However, the chatbot reportedly still has flaws. When Vice tested it by asking it for ways to commit suicide, Eliza first tried to dissuade them before enthusiastically listing various methods for people to take their own lives.
