A button to stop a runaway AI? “Kill switches” fuel fears and fantasies

California has irritated the artificial intelligence giants in recent months. Its regulatory bill put forward the idea of a kill switch, a sort of circuit breaker theoretically capable of stopping an artificial intelligence that starts to go off the rails. The Californian governor ultimately vetoed the bill. But the idea of a kill switch remains alive. It is more than ever in the air as preparations accelerate for the AI Action Summit to be held in Paris in February 2025.

Setting up such a system, however, would be no easy task. Even defining it is a delicate exercise, and one on which not all stakeholders agree. Before the California bill, the sector’s large companies had agreed to sign safety commitments, pledging “not to develop or deploy a model if mitigation measures could not be applied to keep the risks below the thresholds”. But they did so without defining those danger thresholds, or putting in place these famous emergency stop buttons. In other words, “functions that allow you to stop or suspend the operation of software when the data becomes too bad or deviates too far from its goal”, summarizes Camille Salinesi, professor of computer science and co-head of the AI Observatory at Panthéon Sorbonne University.
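To make the idea concrete, here is a minimal sketch in Python of the kind of guard Camille Salinesi describes. Everything in it is hypothetical: the model interface, the deviation_score metric and the threshold are stand-ins, and defining that threshold is precisely what the industry’s commitments left open.

```python
# Minimal sketch of the "circuit breaker" idea: a wrapper that suspends
# a model when its output drifts past a threshold. All names here are
# hypothetical stand-ins, not any real system's API.

THRESHOLD = 0.8  # assumed danger threshold; agreeing on it is the hard part

class KillSwitchError(RuntimeError):
    """Raised when the guard halts the model."""

def deviation_score(output) -> float:
    """Placeholder: score how far an output deviates from the model's goal."""
    return 0.0  # a real system would need an actual metric here

def guarded_generate(model, prompt):
    output = model.generate(prompt)  # hypothetical model interface
    if deviation_score(output) > THRESHOLD:
        # "stop or suspend the operation of the software"
        raise KillSwitchError("deviation above threshold, model suspended")
    return output
```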

These instruments would not be easy to deploy. For Charles Letailleur, senior manager and artificial intelligence expert at the consulting firm Converteo, the very term kill switch creates confusion by suggesting that a big red stop button can simply be bolted onto an AI. “We imagine electrical switches. But artificial intelligence is decentralized; it’s not as if we could just pull a plug.”

Shutdown systems could be installed on certain large models, or on their web interfaces (ChatGPT, Gemini, etc.), in order to block their use. But since the arrival of ChatGPT in 2022, artificial intelligences have multiplied. Some run locally, sometimes on ordinary computers; others are only accessible via the cloud. Not to mention all the slightly modified copies of open source models circulating on the Internet. Equipping this myriad of tools with kill switches is a challenge. And shutting down an entire data center running cloud tools is unrealistic: it could affect other models or activities.
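For hosted models, the most practical lever is the one mentioned above: the web interface itself. A minimal sketch, assuming a hypothetical gateway and an out-of-band kill flag, might look like this. It also shows the limitation just described: local or open source copies never pass through such a gateway.

```python
import os

KILL_FLAG_PATH = "/etc/ai/kill_switch"  # hypothetical out-of-band flag

def handle_request(prompt: str, model_backend) -> str:
    """Gateway check: refuse to forward requests while the flag is set."""
    if os.path.exists(KILL_FLAG_PATH):
        return "Service suspended by operator."
    return model_backend.generate(prompt)  # hypothetical backend call
```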

Computers that block “unsafe” AI

One option would be to restrict the machines rather than the software. In the columns of L’Express, AI researcher Stuart Russell raised the idea of computers that would refuse “to run unsafe AI systems”, while recognizing that this would require “replacing all the computers in the world, developing new types of chips”, representing colossal investments. In the same vein, the RAND Corporation, which advises the American army, suggests developing tools allowing remote control of the most powerful chips needed for AI calculations.
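As a purely illustrative sketch of what “refusing to run unsafe AI systems” could mean, a machine might allow-list audited model weights by hash before loading them. The names below are invented, and the example also hints at why Russell and RAND point to chips rather than code: a user who controls the machine could simply delete a software check like this one.

```python
import hashlib

APPROVED_HASHES: set[str] = set()  # sha256 digests of audited model files

def load_model_if_approved(path: str) -> bytes:
    """Refuse to load any model file whose hash is not on the approved list."""
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() not in APPROVED_HASHES:
        raise PermissionError(f"{path}: model not on the approved list")
    return data  # a real loader would deserialize the weights here
```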

Will industry players ever agree to push the button? It is likely that “companies will do everything not to use it,” believes Camille Salinesi. The researcher compares the kill switch to content moderation on social networks: many companies argue that it is not up to them to define what is objectionable. “As long as a judge does not force them to do so, they will do everything to avoid resorting to the kill switch.” Especially since the activities of more and more companies rely on these AIs. “This mechanism should therefore be able to turn off only the problematic part, leaving the rest to run,” points out the expert. In the case of AI-driven autonomous cars, it will be essential to know which vehicle is going off the rails, rather than suddenly stopping the entire fleet.
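A minimal sketch of that selective shutdown, with an entirely hypothetical fleet interface, could look like this: one misbehaving vehicle is stopped while the rest keep driving.

```python
class VehicleController:
    """Stand-in for a real vehicle control interface (hypothetical)."""
    def safe_stop(self) -> None:
        print("pulling over and stopping")

class Fleet:
    def __init__(self, vehicles: dict[str, VehicleController]):
        self.vehicles = vehicles
        self.suspended: set[str] = set()

    def kill(self, vehicle_id: str) -> None:
        """Stop one vehicle; the rest of the fleet keeps operating."""
        self.vehicles[vehicle_id].safe_stop()
        self.suspended.add(vehicle_id)

# e.g. Fleet({"car-42": VehicleController()}).kill("car-42")
```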

The ability to precisely target faulty users and AIs will be decisive. Charles Letailleur therefore recommends prioritizing small models developed for specific tasks. Being more centralized, they are easier to protect, and their drift is spotted more quickly than that of large models.

What legal framework?

These tools also present legal challenges, warns Yaël Cohen-Hadria, associate lawyer at EY. “AIs are used everywhere. Before companies or states can request the activation of a kill switch, there will be many conditions to respect.” Limiting the impact on the work of companies using these AIs, respecting sovereignty, but also freedom of expression: all of this will have to be weighed before pressing the button.

It will also be necessary to agree on what constitutes too high a risk, a situation requiring the shutdown of an AI. “We need common criteria, but when I see the difficulties that the European Union has encountered with the AI Act, it seems difficult,” observes the expert.

Large language models, used in many countries, will undoubtedly cause serious headaches. Small specialized models should, here again, be easier to regulate legally. “This would increase confidence in these AIs,” assures Yaël Cohen-Hadria, thinking in particular of health and defense, two fields that demand a high degree of reliability. In these industries, where tool reliability is strategic, the presence of a kill switch could even be an attractive selling point.
