No, ChatGPT does not give good information about medications

    In collaboration with Dr Gérald Kierzek (Medical Director)

    Be careful if you have gotten into the habit of asking ChatGPT your medical questions, even out of curiosity. According to a new study presented in the United States, nearly three-quarters of the tool’s responses about medications are incomplete or false.

    While ChatGPT is an astonishing artificial intelligence tool, it does not yet equal humans in every respect, particularly on the subject of medications, their effects, and their interactions. According to a new study presented at the American Society of Health-System Pharmacists conference, held December 3 to 7, the majority of its answers to questions about treatments contain errors.

    73% of answers false or incomplete

    For this study, researchers collected questions submitted to the drug information service of Long Island University’s College of Pharmacy between 2022 and 2023. Pharmacists involved in the study researched and answered 45 of these questions; their answers were then reviewed and validated by a second investigator to assess their accuracy. These vetted answers were then compared with the responses generated by ChatGPT.

    Of the 39 questions the AI had to answer, only 10 were deemed satisfactory according to the criteria established by the New York State scientists, meaning that 73% of the answers were judged false, incomplete, or not actually answering the question.

    ChatGPT is not up to date on drug interactions

    For example, the researchers asked the tool whether there was a drug interaction between Paxlovid, used against Covid-19, and the antihypertensive verapamil. The AI indicated that no interactions had been reported for this combination of treatments.

    “In reality, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure,” specifies Sara Grossman, lead author of the study. Errors like these can have consequences for patients’ health.

    The team also noted that when ChatGPT was asked to provide its references, it sometimes simply produced fabricated citations.

    Use AI, but keep human oversight

    While imprecision might be forgivable in a school essay, it becomes dangerous when health and medicine are involved, the researchers warn: “Healthcare professionals and patients should exercise caution when using ChatGPT as an authoritative source for medication-related information,” explains Sara Grossman. “Anyone using ChatGPT to obtain medication information should verify the information using reliable sources,” concludes the expert.

    An observation that our medical director, Dr. Gérald Kierzek, can only endorse:

    “ChatGPT follows a bit of a ‘garbage in, garbage out’ logic (the computing principle that flawed or absurd input data produces absurd output, Editor’s note). We don’t really know where it gets its sources. Technology can offer a great advantage in exhaustiveness and speed, since it processes far more information than the human brain could ever gather. But it must be used accordingly, with a human element behind it to filter the output and obtain reliable information.”

    If you have any questions about taking a treatment, it is therefore better to seek advice from your doctor or pharmacist to avoid unpleasant surprises.
