AI Act: the European Parliament's plans to regulate artificial intelligence tools like ChatGPT

The rights and duties of machines… The European Parliament is currently working on a draft regulation to govern artificial intelligence, a text in the pipeline since 2021, when the European Commission made its first proposal. At stake: the use of data, the risk of mass-produced fakes, the delegation of power to not-so-intelligent systems, mass surveillance…

The institution agrees that these tools, which are sweeping the market, need a framework: ChatGPT, the best known of them, gathered millions of users within days. But European lawmakers still have to agree on the exact terms of this regulation. A new text, presented to Parliament this month by the rapporteurs Dragoș Tudorache and Brando Benifei and revealed by the news site Contexte, gives a sense of how the work is progressing.

It first proposes a definition of these systems. As things stand, they are described as “trained on vast, large-scale data, designed for generality of results and adaptable to a wide range of tasks”. Matching this definition: ChatGPT, but also Bard and other text generators; the embedded systems of Tesla, Elon Musk’s autonomous cars, which compute how to drive on the basis of a huge database; and systems that assist surgeons during operations. Less desirable, at least in Europe: social scoring systems and biometric identification in public spaces…

Agreeing on the terms

Within this nomenclature proposed by the European Union, one central point is under debate: a classification of these tools by their potential dangerousness, which is the main contribution of the text being drafted. Certain tools would be banned outright (facial recognition, social scoring), while others, classified as “high risk”, would have to comply with a set of rules… which remain to be written.

“The AI Act plans to prohibit systems presenting an ‘unacceptable risk’ to human rights, to regulate ‘high risk’ systems and to make ‘limited risk’ systems more transparent,” summed up Katia Roux, advocacy officer at Amnesty France, on Twitter on March 15.

According to the first draft of the text, AIs classified as “high risk” could be required to undergo a battery of tests before being placed on the market, in order to identify their capabilities, risks and biases, and to secure the data used, among other things. Until now, making such tools available to the public has been left mainly to the discretion of their developers.

The most talked-about AI at the moment, ChatGPT, is marketed by OpenAI. The American start-up offers a tool capable of generating all kinds of text, from poems to legal documents to business plans. The machine runs on a system called GPT-4, whose performance places it, for example, in the top 10% of candidates on the American bar exam.

A “high risk” ChatGPT?

Is it a “high risk” tool? Elected officials have yet to decide, and do not yet know exactly what to make of it. For France, through its minister for digital affairs Jean-Noël Barrot, quoted in Le Monde, ChatGPT and the like should be considered high risk, although it is still unclear exactly what these very recent tools can do. “We are in the process of discovering the problems that these AIs can pose: we have seen that ChatGPT can be used to create very convincing phishing messages, or even to de-anonymize a database and trace someone’s identity,” Bertrand Pailhès, who heads the new AI unit of the CNIL, France’s data protection authority, told AFP last February.

Another point of contention: how to regulate without slowing the rise of European AI, when Europe wants to become a leader in the field and considers that we are at the dawn of a “fourth industrial revolution”? The question is again followed closely by France, where Emmanuel Macron supports the development of these technologies. Some worry about the ability of small players, which lack large legal departments, to adapt to new rules.

Many small AI players are rushing to make their tools public in order to collect data from curious users and improve their AI… Such a text could push them to abandon the “high risk” AI sector. “While intended to enable safer use of these tools, this proposal would create legal liability for open-source models, which would harm their development. This could further concentrate power over the future of AI in large technology companies and prevent research that is essential to the public’s understanding of AI,” wrote Brookings, an American think tank of rather centrist persuasion.

While the outlines of the text remain blurry, everyone agrees that these tools will strongly influence our lives in the future. “I am deeply disturbed by the potential harm of recent advances in artificial intelligence,” said Volker Türk, UN High Commissioner for Human Rights, on February 18, calling for “effective safeguards” to be put in place.


