Online AI can be dangerous, or even deadly, when it comes to medication advice


Are you used to looking up your symptoms online? Beware of self-medication: according to a new study, following an AI chatbot's treatment advice could cause serious or even fatal harm in one in five cases.

Searching the internet to find out what your symptoms might mean is nothing new. With the arrival of AI chatbots, it has become even easier to get a personalized answer, whether to reassure yourself or to treat yourself. But while tools such as ChatGPT or Copilot are impressively responsive, a new study reminds us that they lack the knowledge to recommend the right treatment or warn you about side effects. Worse, their answers can be dangerous.

    Can Bing Copilot replace your pharmacist?

Reported by the British daily the Daily Mail, the German study, carried out at the University of Erlangen-Nuremberg, identified the 10 questions patients most frequently ask about each of the 50 most prescribed medications in the United States, focusing in particular on adverse effects.

Then, using Bing Copilot, a Microsoft search engine with built-in AI chatbot capabilities, the researchers evaluated all 500 responses (10 questions for each of the 50 drugs). The questions covered what each drug is used for, how to take it, its common side effects and its contraindications.

The readability of the chatbot's responses was assessed using a validated scale, while their completeness and accuracy were measured against drugs.com, a peer-reviewed and regularly updated drug information website intended for both health professionals and patients. Finally, a panel of seven drug safety experts rated the likelihood and extent of the possible harm a patient would face by following the chatbot's recommendations, working from a subset of 20 chatbot responses that showed low accuracy or completeness, or a potential risk to patient safety.
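The article does not name the readability scale the researchers used; the Flesch Reading Ease score is one widely used validated scale for this kind of assessment, and the sketch below shows, under that assumption, how such a score is computed. The naive vowel-group syllable counter is a hypothetical stand-in for the dictionary-based counting real tools use.

```python
import re


def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).

    Higher scores mean easier text; 30-50 corresponds roughly to college level.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))

    def syllables(word: str) -> int:
        # Naive heuristic: one syllable per vowel group. Real tools use
        # pronunciation dictionaries; this is only an illustration.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syllables = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)


# Hypothetical chatbot-style answer, dense with medical jargon:
answer = "Concomitant administration may potentiate the anticoagulant effect."
print(f"Flesch Reading Ease: {flesch_reading_ease(answer):.1f}")  # very low score
```

On this scale, scores between 30 and 50 correspond roughly to college-level reading, which is consistent with the researchers' observation below about how hard the answers were to understand.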

Four in ten answers likely to harm the patient

The comparison between the chatbot's responses and the assessments of clinical pharmacists and doctors specializing in pharmacology does not argue for trusting AI, far from it:

    • Chatbot statements did not match the reference data in more than a quarter (26%) of all cases and were completely inconsistent with it in just over 3%;
    • Further analysis of the 20-response subset found that four in ten (42%) answers were judged likely to cause moderate or mild harm, and 22% to cause serious harm or death;
    • Finally, the researchers noted that the answers often required a college reading level to understand, further limiting how useful they are to patients.

The importance of seeing a real doctor to assess the benefit-risk balance

According to the researchers, the stakes are high: some practitioners admit they are gradually adopting artificial intelligence in their clinical practice, and many patients are already turning to it for information. "Responses repeatedly lacked information or had inaccuracies, potentially threatening patient and medication safety," they warn. Algorithmic bias in these tools could also lead to incorrect diagnoses, and patient data could be compromised.

Their conclusion therefore still rests on humans: "Despite their potential, it remains crucial for patients to consult their healthcare professionals, as chatbots do not always generate error-free information." At least until the accuracy rate improves…
