Corruption, sexual harassment… When ChatGPT accuses Internet users of imaginary crimes


It is a very cruel biography that ChatGPT writes for Brian Hood. Ask OpenAI's famous chatbot about this Australian mayor of Hepburn Shire and it wrongly accuses him of involvement in a murky corruption affair implicating a subsidiary of Australia's central bank in the 2000s. Brian Hood did indeed work for that company, but contrary to what ChatGPT says, he was never sentenced to prison for corruption. On the contrary, it was he who alerted the authorities to the payment of bribes.

Small consolation for the mayor: he is far from the only person to whom generative AIs attribute imaginary offenses or crimes. When asked about Jonathan Turley, an American law professor, OpenAI's chatbot claimed that he had sexually harassed a student during a class trip that never took place, citing as its source a Washington Post article that never existed. The professor, for his part, has never been charged with any such offense.

Complaints against ChatGPT are mounting

The errors generative AIs make when asked about a person or an event do not always concern facts as serious as these, but their blunders are no less problematic. French MP Eric Bothorel took up the subject and, on April 12, filed a complaint against ChatGPT with the CNIL, France's data protection authority. While OpenAI's AI does not accuse him of anything infamous, it peddles plenty of erroneous information about him, giving him an incorrect date of birth and fabricated professional experience.

These ramblings of generative AI will be a headache for lawyers. The cases immediately bring public defamation to mind. "But for this offense to be established, the act must be intentional," notes Sonia Cissé, a lawyer specializing in technology and data protection at Linklaters. With ChatGPT and its cousins, it is hard to see how one could demonstrate that the AI intentionally defamed a person. And of course, AI companies protect themselves by systematically attaching disclaimers to their products warning that the accuracy of the information provided is not guaranteed.

In the fast-moving world of artificial intelligence, identifying who is responsible for an error will also be tricky. "Is it the company that created the AI? Or the possibly erroneous datasets on which the AI was trained?" Sonia Cissé asks. Admittedly, the GDPR, which governs the processing of personal data in France, requires companies to ensure that such data are up to date and accurate. "The datasets are so voluminous, however, that verifying every piece of information will be impossible; a hybrid framework will no doubt be needed," the Linklaters lawyer observes. The outcome of the first complaints now springing up around the world will be decisive: they will set the rules for the world to come.


