Regulation of AI: another possible path, by Frédéric Filloux

The good news about the anarchic proliferation of new forms of artificial intelligence (AI) is that a consensus already exists on their dangers. This was not the case for social networks, where it took ten years of toxic exploitation before anyone reacted.

During his hearing before the US Congress last week, Sam Altman, co-founder and CEO of OpenAI – whose ChatGPT has attracted 100 million users in a matter of months – explicitly called for regulation of the sector. For him, the solution lies in creating a federal agency that would grant licenses to companies capable of producing the most powerful AIs. Before the same senators, the academic Gary Marcus, one of the sharpest critics of these proliferating AIs, argued instead that they should be overseen by an agency with a global remit, on the model of the IAEA (International Atomic Energy Agency), which oversees nuclear research.

None of this really holds up. An International Artificial Intelligence Agency? For the atom, ten years passed between the first UN proposal, a year after the bombing of Hiroshima, and the adoption of the agency's statute in 1956 – an unthinkable delay at the pace at which artificial intelligence is developing. Moreover, unlike building an atomic bomb, a tremendously difficult undertaking, launching an AI is simple: thousands of variants are active today.

In any case, the tech sector has shown that its players, taken individually, have never been able to self-regulate. Meta has never sought to contain the toxicity of Facebook and Instagram, while Google has maintained complete opacity over its data collection.

Faced with this, governments are responding in the classic ways: incantations from the Biden administration; and, from the European Union, regulation disconnected from reality and from the tempo of the moment – though its AI Act at least has the merit of existing.

Neither states nor the EU have the resources

Eric Schmidt, CEO of Google from 2001 to 2011, is rather alarmist about the advent of the large language models that form the substrate of AI. In his view, they learn far too quickly; they are dangerous in their raw, untamed state, before their creators rein them in; and, above all, their so-called "emergent" behaviors – those that escape their creators' control – are frightening. He does not believe that governments are competent to handle this: "When this technology spreads, the situation will worsen. And I much prefer that companies themselves agree to define a reasonable framework." A government cannot understand what is happening: everything is too new, too complex, too fast. For Schmidt, states and the EU simply have neither the financial and technical resources nor the agility to deal with the situation. His detractors, such as Emily Bell, a professor at Columbia, retort that this amounts to asking the fox to guard the henhouse.

One solution, however, would be for governments to impose a collective-responsibility approach on the major AI players. Here is a possible scheme: the major AI players, in the United States and in Europe, would finance a non-profit entity staffed by high-level engineers (paid at market rates, not civil-service rates), supplemented by lawyers and public-policy specialists. The budget: 40 to 50 million dollars a year – roughly the annual cost of 80 engineers and data scientists plus about thirty lawyers and public-policy specialists. An affordable sum: when Facebook set up its Oversight Board in 2018, it endowed it with 130 million per year, the equivalent of three times the budget of Arcom.

Independent governance would be an absolute imperative. Nothing very complicated from a legal standpoint: in the Anglo-Saxon world, countless structures are controlled by a Board of Trustees, with an executive committee reporting only to that board, without outside interference.

The mission of this "AI Global Safety Board" would be to dissect every version of the most powerful AIs in their final development phase and subject them to a battery of stress tests to detect abnormal behavior. In doing so, it would sketch out the technical and legal framework on which countries could rely to define their own regulations.


