Starting today, minors will no longer have access to ChatGPT without adult consent, as OpenAI has revised its terms of use following the chatbot's ban in Italy. It remains to be seen how the restriction will be enforced…

Starting today, minors will no longer have access to ChatGPT

ChatGPT continues to make headlines. Ever since the chatbot was made available to the general public and the major tech players embarked on a mad race for artificial intelligence, not a day goes by without news of some new AI-based system.

But everything is moving very quickly. Too quickly. Much too quickly. AI raises many ethical problems, and abuses of the technology have already been pointed out, to the point that researchers and prominent digital figures recently warned of its dangers, calling in an open letter for a moratorium on large-scale experiments to allow time to think about the future. Some governments have opted for a more drastic approach, like Italy, which last week decided to ban access to ChatGPT with "immediate effect", on the grounds of a lack of respect for user privacy, breaches of the General Data Protection Regulation (GDPR), and a lack of protection for minors (see our article).

Other countries immediately followed suit, forcing OpenAI, the start-up behind the chatbot, to take steps to correct course. In a blog post published this Wednesday, April 5, the company presents its "approach to AI safety" and explains how it intends to better protect its users. For starters, only users over 18, or over 13 with parental consent, may now chat with ChatGPT.

ChatGPT bans minors: a response to the Italian ban

This age restriction responds to one of the main criticisms formulated by the Italian authorities, who pointed out that ChatGPT did not verify the age of its users, which "exposes minors to responses that are absolutely unsuited to their level of development and awareness", especially since the company stated in its legal notices that its service is intended only for users aged 13 or over. "Protecting children is one of our safety priorities. We require people to be 18 or older, or 13 or older with parental consent, to use our AI tools, and we are exploring verification options," OpenAI states. The company soon intends to introduce a tool to verify users' age. We have no idea yet what form it will take; hopefully it will be more than a simple checkbox, without being too intrusive on our privacy.

The company is keen to point out that it takes the time to carry out rigorous tests and to put safeguards in place before making its technology public. "For example, once our latest model, GPT-4, completed its training, we spent over six months working across the organization to make it safer and better aligned before releasing it to the public," it argues. "We are carefully and gradually releasing new AI systems, with significant safeguards in place, to a growing group of people, and making constant improvements based on the lessons we learn." The fact remains that GPT-4 still continues to hallucinate… OpenAI says it is deploying "considerable efforts to minimize the risk that our models generate content harmful to children", claiming to have made GPT-4 82% less likely than GPT-3.5 to respond to requests for prohibited content. Additionally, the chatbot blocks all child pornography content and reports offending users to the National Center for Missing and Exploited Children.

ChatGPT: more sanctions looming

OpenAI's decision to ban ChatGPT for under-18s is hardly surprising: following the Italian decision, the company had 20 days to address the authorities' concerns or face a fine of up to 20 million euros or 4% of its annual turnover. Not to mention that this sanction paved the way for other countries, especially in Europe, where the GDPR is in force across all 27 member states. In France, two complaints have already been filed with the CNIL, the French data protection authority, as revealed by the news site L'Informé. The first was filed by lawyer Zoé Vilain, president of Janus International, an association that raises awareness of digital issues, who criticizes the chatbot for not presenting her with any general terms of use during registration and for lacking any privacy policy.

The second complaint comes from a developer named David Libeau, who found personal data about himself by querying ChatGPT. "When I asked for more information, the algorithm started to fabricate, attributing to me the creation of websites or the organization of online demonstrations, which is totally false," he explains. A finding that is hardly surprising, this type of chatbot being known to "hallucinate", that is, to invent facts with confidence. According to David Libeau, this contravenes Article 5 of the GDPR, which requires that information about individuals be accurate. Yet OpenAI asserts in its post: "we therefore strive to remove personal information from training data where feasible, fine-tune models to reject requests for personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems. These measures minimize the possibility that our models might generate responses that include the personal information of private individuals." For its part, Canada opened an investigation into the company on April 4 for using and sharing personal data without prior consent. Despite the ban on ChatGPT access for users under 18, the problems raised by the chatbot are far from all solved…
