France faces a worrying assessment deficit – L’Express

It has entered quietly. Far less noticed than its general-public version, ChatGPT, or its warlike variant, which equips killer drones in Ukraine, artificial intelligence (AI) has gradually settled into healthcare practice, without fuss or fanfare. Some systems detect cancers; others improve patient care, help formulate personalized diagnoses or simulate millions of molecules for large pharmaceutical groups. These are tasks that doctors and researchers are handing over to machines in the hope of being more efficient and saving time, a rare commodity at a moment when access to care is a major concern for the French.

But AI's intervention in healthcare also raises many questions. Beyond fears of the dehumanization of care, the reliability and effectiveness of the tools are a major source of concern. How can we ensure that these technologies provide adequate solutions based on solid scientific evidence, when their operation remains opaque to the general public, to health professionals, and even to the developers who create them? And what about securing health data collected by private companies or foreign powers? Concerned by these issues, France's national Health Insurance devotes several pages to the subject in its 2023 “Charges et produits” report, which L’Express was able to consult.

Without a national assessment, doctors are responsible for errors

“Large-scale deployment [of AI tools] poses major technical and ethical challenges for the healthcare system. The regulation and evaluation of these systems are crucial to ensure their effectiveness, safety and integration into healthcare practices,” the document states at the outset. The authors focus in particular on Digital Medical Devices (DMN, after the French “dispositifs médicaux numériques”), a category that includes artificial intelligence systems. While those intended for patients are evaluated by the health authorities, this is not the case for DMNs for professional use, which “are not part of any structured national evaluation process” and whose “use by professionals in the context of medical procedures is completely free”. Evaluation, meaning tests carried out in independent studies rather than only by manufacturers, is nevertheless essential, not only to obtain robust scientific evidence but also to protect and reassure doctors.

“When a healthcare professional uses a DMN, if there is no clear recommendation from the authorities, the code of ethics applies: they remain personally responsible, and it is up to them to ensure that the tool is secure and based on the latest scientific data,” explains Pierre de Bremond d’Ars, a general practitioner and representative of the College of General Medicine, which brings together all of France’s general medicine organizations. In the event of a problem – a patient’s complaint, for example – the doctor must be able to show his peers the reasoning that led him to use a given tool. Otherwise, sanctions may apply. “Today, artificial intelligence is considered an assistive device, but the medical decision always belongs to the doctor. AI therefore carries no medical responsibility of its own,” confirms Yann-Mael Le Douarin, head of the Health and Digital Transformation Department at the General Directorate of Healthcare Provision.

New improvements, new risks

The young French start-up Nabla illustrates the problem well. Still unknown a year ago, it has already raised tens of millions of euros and signed major contracts in the United States. Its promise? To revolutionize medical consultations with an AI capable of automatically generating a standardized medical report at the end of a consultation it has just recorded. “Today, during a consultation, many doctors alternate between looking at their patient and at their computer screen, where they note down information. Tools such as Nabla propose that an algorithm take over this work, saving doctors time while allowing them to listen more attentively,” illustrates Pierre de Bremond d’Ars. But, as with any voice-recognition tool, this raises the problem of securing the data, which is recorded on private servers.

“Several doctors have asked the Council of the Order to conduct an audit, because each doctor is responsible for the data they transmit,” continues the general practitioner, who has decided, personally, not to use Nabla until a health authority gives its go-ahead. Many doctors have already taken the plunge: Nabla claims 30,000 users. “A national assessment will not resolve the issue of data security on its own,” Yann-Mael Le Douarin nevertheless points out. “The High Authority for Health can look into evaluating the effectiveness of care, but as far as the security of health data is concerned, it will only verify that the legal texts are respected.” For the rest, other authorities, such as the National Commission for Information Technology and Civil Liberties (CNIL), can take charge, relying in particular on French and European legislation, which remains among the most protective in the world.

The “black boxes” of AI

The problem is no less complex when it comes to ensuring that a tool is reliable and produces results based on science, especially when doctors are solicited dozens of times a month for a myriad of products. All the more so since, as the National Consultative Ethics Committee points out in its opinion “Medical Diagnosis and Artificial Intelligence: Ethical Issues”, the technological maturity of AI systems remains uneven, and there is a significant gap between the promises on display, the actual state of the science, and the level of knowledge of health professionals and the general public. The European regulation on AI (the AI Act) has already raised the alarm about the problem of “black boxes”, which can be summed up as follows: algorithms are fed with data chosen and known by their creators, but the way these tools produce a result is not always understood, even by those creators. This is why algorithms, even properly trained ones, can provide answers that appear credible but are false. “This is particularly problematic in the field of health, since ultimately it is the professionals who are responsible,” insists Pierre de Bremond d’Ars.

And even when the databases used to feed AI are known, they are not always suitable for the purpose. In France, most good-quality databases come from hospital data; there is, by contrast, no database that allows algorithms to be properly trained on community medicine data. The P4DP project (Platform for Data in Primary care) aims to fill that gap, but it will still take many months. In addition, several artificial intelligence tools rely on OpenAI models (ChatGPT, DALL-E, etc.), which are trained on data from the Internet. “But there is good and bad on the Web, and even if developers put safeguards in place to prevent their tools from producing aberrant responses – like treating cancer with carrot juice – the risk remains,” the doctor continues.

A national strategic issue

Aware of these problems, the College of General Medicine has set up a working group to alert the medical and paramedical community and encourage it to address these issues. “These tools can be extremely interesting and beneficial, and they will change our practices no matter what, but they raise important ethical questions that we must keep control over,” insists Pierre de Bremond d’Ars. The Health Insurance report also emphasizes the need to train doctors, as well as the role of public authorities in building a framework for these tools and the companies that create them. The document further points out that without a national assessment it will be impossible to determine their medical benefits precisely, which will hold back the deployment and use of those that are effective, as well as the financing of certain projects.

Fortunately, groundwork has already been laid by the World Health Organization (WHO), which reviewed the processes for evaluating AI systems in a document published in 2023. Building on this work, the Health Insurance plans in particular to run an experiment providing general practitioners with an AI tool to assist in the interpretation of electrocardiograms (ECGs). “This experiment will be launched as part of a broader program – under the aegis of the High Authority for Health – aimed at facilitating the adoption of AI by healthcare professionals in their daily practice,” the report notes.

One thing is certain: France urgently needs to address the issue seriously. The country’s ability to maintain sovereignty over its health data and to protect itself against cyber risk is not only a regulatory issue but a strategic one. “AI in health represents a major challenge for French industrial competitiveness […] particularly in light of the major advances made by the United States and China on this subject,” the Health Insurance insists. The risk of a massive leak of health data to these countries is real, and algorithms trained on foreign databases could carry biases and be less effective on the French population.
