Artificial intelligence: the AI Act, Europe's bureaucratic contraption


Are we regulating the construction of cars, planes, or algorithms? The question comes to mind when reading the European bill intended to regulate artificial intelligence, so classically industrial is its inspiration. This 140-page text, called the "AI Act", is all about "registration", "compliance", "notified bodies", "assessment processes" and, of course, (heavy) sanctions... Not a single legal detail is missing from this long legislative flow. For its main author, European Commissioner Thierry Breton, who is never stingy with hyperbole, "it is a feat" to have produced such a bill in two years, a geological age on the scale of innovation.

The European Council's first stirrings on AI date back to 2017, with the expression of a "sense of urgency". Two years later, it issued its conclusions with a view to an AI "made in Europe", accompanied by a series of precautions regarding the emergence of so-called "high-risk" artificial intelligence (medicine, security, infrastructure management, etc.). In 2020, the European Parliament began translating these concerns into resolutions, which led in 2022 to the text that will be voted on by Parliament this spring. In real life, that of technological innovation, the chronology is this: in 2017, Google engineers published the first description of the transformer model that would serve as the basis for generative AI; a year later, OpenAI launched GPT-1, followed each year by a version ten times more powerful than the previous one; in 2021, Google DeepMind published AlphaFold, which makes it possible to predict the structure of proteins, paving the way for the creation of new drugs. At the end of 2022, Google and Microsoft declared all-out war over large language models (LLMs), while tens of billions of dollars poured into new companies.

"While its creators wanted an avant-garde text, capable of demonstrating European technological leadership in the face of the Americans, it is now outdated," judges Gérald Sadde, a lawyer at the firm Shift who specializes in these issues and who wonders whether the EU is not trying to regulate something that may not be regulatable at all.

One of the foundations of the text is the classification of AI systems according to their degree of risk. "This already presupposes single-task AI, whereas with ChatGPT, for example, we are now entering the era of generalist AI," the lawyer notes. Indeed, the defining characteristic of so-called generative artificial intelligence is precisely the breadth of its field of application: ChatGPT can just as easily write computer code for a service intended for children as be diverted to churn out an avalanche of disinformation capable of influencing an election.

Like a Seveso factory

Lawyers specializing in tech also note that the legal arsenal is already well equipped to punish the excesses of AI: endangering others, invasion of privacy, breach of trust, plagiarism, commercial parasitism, and so on. Admittedly, these legal tools apply only once an infringement has been established. Hence the objective of the European text, which aims to intervene upstream of AI development, with a series of prior obligations. But these seem designed for the construction of Seveso-classified factories: precise definition of an AI's scope, registration, documentation, traceability, governance, approval audits, a "CE" label (for European conformity)...

The collision between this text and reality is brutal. In practice, the AI Act will be complicated to implement. Take traceability, for example: the idea that we can understand how an artificial intelligence arrives at a suggested action, or even an actual decision. Eric Schmidt, former head of Google, believes this amounts to observing the neurons of a brain under a microscope to detect which ones fire during reasoning. The orders of magnitude are in fact comparable: 175 billion parameters for GPT-3, versus 86 billion neurons in the human brain (though with 1,000 times more connections between cells). It is therefore hard to hope for the kind of complete traceability achieved in the manufacture of a tray of lasagna.

The part that most worries professionals is the requirement to audit AI systems categorized as high-risk. According to the text, this task will be delegated to independent third parties (the "notified bodies", in European jargon). "These will be consulting firms," says Paul Pinault, head of strategy at Braincube, which analyzes large industrial production systems by making extensive use of AI.

The intrusion of consultants into the AI Act

The intrusion into this ultra-scalable ecosystem of a swarm of consultants, not necessarily more competent than the authors of the algorithms they will be responsible for inspecting, inspires little enthusiasm. Not to mention the risk of leaks: the lawyers and AI operators interviewed fear having to reveal essential aspects of their intellectual property.

However sincere the intention may be, it is again modeled on classic industrial schemes: auditing a toxic-effluent treatment chain, to take one example, has nothing to do with analyzing an AI in constant flux, one that in some cases alters its very shape through its interactions with users. This is the notion of reinforcement learning from human feedback (RLHF), which is not even mentioned in the AI Act, a text very light on technical specifications. These audits will not make it possible to anticipate an AI's evolution cycle, several professionals believe. "Between their cost and their cumbersomeness, these processes risk encouraging entrepreneurs to go and create AI outside Europe," concludes Paul Pinault of Braincube.

Echoing his words, an unofficial note from the US government, revealed by the site Euractiv, considers that the text imposes excessive responsibility on the creators of these AI systems, to the point that its implementation will be "technically difficult and in some cases impossible". Washington also believes that the conditions for inspecting source code "should be better defined, with more precise award criteria to avoid subjective interpretations, and provide for the possibility of appeal". For the American executive, "the European Commission has constantly closed the door to non-European standards, while the United States is pushing for greater bilateral cooperation", and it criticizes Thierry Breton's text for definitions it considers too broad.

One thing is certain: out of pride or ideology, legislative Europe has already missed the opportunity for cooperation between the two great Western technological powers on what will be the defining issue of the coming years.
