Is ChatGPT about to change our approach to healthcare?


In just a few months, the arrival of artificial intelligence tools such as ChatGPT, capable of producing text and analysis almost instantly, has turned our daily lives upside down. But according to the analytics firm GlobalData, this AI could have just as much power to transform care and health in the short term.

An intelligence capable of writing letters about your health, forwarding your latest results to the right practitioner, or even reassuring you about your symptoms: in 2023, this possibility no longer belongs to the future but promises to be imminent. According to a March 24 statement from GlobalData, a data and analytics company, cutting-edge artificial intelligence has the potential to revolutionize healthcare, and patient care in particular.

What role can ChatGPT play in the care pathway?

For Tina Deng, senior medical devices analyst at GlobalData, we cannot yet grasp the full scope of this tool, which could prove of great service: “ChatGPT can be used to help doctors with bureaucratic tasks, such as writing letters to patients, so doctors can spend more time interacting with patients.”

And that’s not all: chatbots, as we are beginning to see in our AI coverage, have the potential to increase the efficiency and accuracy of preventive care, symptom identification, and post-recovery processes.

AI integrated into chatbots and virtual assistants can also interact with patients directly: reviewing a patient’s symptoms, offering diagnostic guidance, and even booking appointments for face-to-face visits with a health professional… This is what AI can do today.

“These conversational chatbot technologies combined with health AI are already being used successfully,” explains Professor Fabrice Denis, oncologist and radiotherapist and president of the Institute for Smart Health.

“In particular, they facilitate remote monitoring of patients with chronic diseases who do not always have internet access or enough familiarity to use apps. For example, they can call patients every week and ask them about their symptoms, their weight, and so on; the data is then sent to the healthcare facility, which can adapt the treatment accordingly.”

Freeing up consultation time or replacing humans: the question at hand

Professor Denis is categorical: for him, these technologies are complementary tools that often improve communication between patients and health professionals, but they are not intended to replace humans or the bond that can form between a patient and their doctor.

For GlobalData, this could help reduce the workload of hospital staff, improve the efficiency of patient flow, and cut healthcare costs, all while enabling round-the-clock service. A relief that should directly benefit consultations.

AI can also make mistakes: so who is responsible?

This change of model, however, raises questions of safety, especially for an intelligence that is meant to learn from vast amounts of medical data. It is a point that GlobalData acknowledges in its press release:

“The use of chatbots in patient care and medical research raises several ethical concerns. As massive amounts of patient data are fed into machine learning to improve the accuracy of chatbots, patient information becomes vulnerable. The information provided by chatbots can also be inaccurate and misleading, depending on the sources the chatbots draw on.”

A medical error therefore remains possible, even with ChatGPT. But in that case, who is responsible, in the eyes of the patient and of the profession? According to Professor Denis, the technology, as powerful as it is, remains only a tool in the hands of doctors, who remain responsible:

“If the initial data is not of good quality or not up to date with current knowledge, the algorithms will give false results. And indeed, in case of error, it is the doctor who will have to assume full responsibility. The doctor always has a duty to inform his patients and must always be able to justify his decisions. If he applies a decision that the AI has proposed to him and something goes wrong with the patient, without being able to understand the reason for that decision, and therefore without having been able to explain things clearly to his patient, there is indeed a potential medico-legal issue.”

Whatever the risks, there is no doubt that AI-powered chatbots will be used more and more in the healthcare industry. Healthcare systems will have no choice but to adapt to this expansion by updating their regulations.