With the rise of AI, many people now turn to chatbots for answers to their questions. But several recent alerts show that ChatGPT should not be trusted when it comes to health!

A little cold that has been dragging on for a few days? A strange spot on your chest? No worries, just ask ChatGPT! We already tended to ask Google for the answer to all our problems and questions, and the popularization of generative artificial intelligence has only reinforced the phenomenon. These tools inspire a great deal of confidence, with their natural language, their enormous databases and their unfailing aplomb: they always have an answer for everything! But that does not stop them from spouting nonsense. Not to mention that scammers amuse themselves by selling books written by ChatGPT on e-commerce platforms. And while some errors are benign, others can have far more serious, even fatal, consequences…

ChatGPT and health: a dangerous combination

Researchers at Brigham and Women’s Hospital, a facility affiliated with Harvard University, have published a study in the journal JAMA Oncology revealing how ChatGPT can provide totally wrong answers to health questions. They asked it various questions about cancer treatment, and it turned out that 34.3% of the answers provided by the language model did not match the recommendations of the country’s main cancer centers. Worse still, 13 of the 104 answers were entirely made up, which is particularly dangerous on such a subject… Danielle Bitterman, co-author of the study, explains that “ChatGPT’s responses can be very human-like and quite compelling. But when it comes to making clinical decisions, there are so many intricacies to consider in each patient’s unique situation. A good answer can be very nuanced, and it’s not necessarily something ChatGPT or another language model can provide.”

This is not the only study to highlight the problem. A little earlier, as part of a study, researchers from CHU Sainte-Justine and the Montreal Children’s Hospital put twenty medical questions to ChatGPT, each relating to a recently published scientific article, and asked it to answer while providing references. The result: the chatbot made five major factual errors in its answers and invented 70% of the references it provided. Not everything is catastrophic, however, and ChatGPT has been shown to be an excellent aid in the medical field. For example, GPT-3 – the previous language model on which the chatbot is based – was able to identify the early stages of Alzheimer’s disease in a person’s writing with about 80% accuracy. AI can thus be useful to professionals, helping them think through cases and consider avenues, but its answers should never be taken at face value. Nothing replaces the word of a health professional!

Books written by AI: sometimes deadly advice

But even when we don’t fully trust ChatGPT, we sometimes seek advice from sources we believe to be reliable: books. However, more and more books sold on Amazon and other online booksellers are actually written by artificial intelligence. Recently, scammers took advantage of the summer holidays to sell fake AI-written travel guides on Jeff Bezos’ platform (see our article). But beyond the unpleasant surprises and the nasty feeling of having been duped, these books can prove dangerous to your health.

The Mycological Society of New York and 404 Media are sounding the alarm. As fall approaches, many mushroom enthusiasts go out picking. But with around 5,000 species of mushrooms on French soil, it is difficult to tell what can be eaten from what is dangerous to your health. So many people turn to guides sold on Amazon… which are sometimes written by AI without any mention of it. Some “authors” even use image-generating AI for their author photo in order to appear more credible. This is the case of Wild Mushroom Cookbook: a beginner’s guide to learning the basics of cooking with wild mushrooms with complete, easy-to-follow, healthy and delicious recipes!, available on Amazon. The authors of this type of work all have similar profiles: dozens of titles – mainly practical guides on food and nutrition – published in the last two months, and a vague author’s page citing training in nutrition and research experience, without specifying at which institutions they worked or earned their degrees. Some, more daring, give impressive references, but when you dig a little you quickly realize that they are false.

The problem is that AI is unable to discern the differences between an edible mushroom and a poisonous one, which can be very subtle given how closely some specimens resemble each other. Sigrid Jakob, president of the Mycological Society of New York, explains that poisonous mushrooms “may look like popular edible species. A bad description in a book can trick someone into eating a poisonous mushroom.” The organization therefore calls for caution: “Please only buy books from well-known authors and collectors, it’s literally a matter of life and death.” Amazon has already withdrawn some books from sale. The platform is doing its best to detect fake or dangerous books, but it has to contend with the sheer number of AI-generated books published in recent months, making it difficult to sort through them all in time.

The moral: never blindly trust ChatGPT! There are apps that can identify mushroom species from a photo of the specimen, but the best approach is to seek the advice of a pharmacist or a mycologist. The Mycological Society of France, which specializes in the study of mushrooms in France, also offers a list of poisonous and edible mushrooms on its website.
