Soon an emergency button to deactivate dangerous AI

Governments and AI companies have come together to commit to developing the technology responsibly. Among their pledges: putting in place a button that can instantly deactivate any AI deemed too dangerous.

More than ever, with the exponential boom in artificial intelligence, the debate over the safety of this technology is raging. Are we moving too fast in the development of AI? Is it a tool that benefits humanity, or is it dangerous? By pushing it ever further, do we risk ending up with machines like those in I, Robot? To identify and contain the excesses linked to the deployment of artificial intelligence in our daily lives, the European Union, the governments of ten countries, and sixteen large companies specializing in artificial intelligence (Amazon, Microsoft, Google, OpenAI, Samsung, Mistral AI, Meta, Anthropic, etc.) came together at an AI summit in Seoul and signed the Frontier AI Safety Commitments. The objective: to define guidelines for the responsible development of this technology, including provisions for a "Terminator scenario" in which AIs would turn against their creators and users, as reported by CNBC. The initiative reflects growing concern about the potential risks associated with AI.

AI Summit: kill switches to stop everything

Under this new agreement, the signatory companies commit to detecting threats and putting protective measures in place against societal risks. They also agreed to publish safety frameworks outlining how they will measure the challenges posed by their models, such as preventing misuse of the technology by malicious actors. Each framework will define a threshold, a "red line", beyond which the risks incurred are considered intolerable.

Among the measures mentioned is the plan to put in place "kill switches": a mechanism that would make it possible to quickly and easily halt the activity of any AI. Having some kind of emergency button to deactivate AIs makes sense, because the companies involved, such as OpenAI, themselves admit that they do not know how far their tools can go and that they carry risks.
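To make the idea concrete, here is a purely illustrative sketch, not any company's actual implementation: one simple way a kill switch could work is a shared flag that a hypothetical model-serving loop checks before each request, so that an operator (or an automated safety monitor) can halt the system immediately. The names `serve` and `run_model` are assumptions for illustration only.

```python
import threading

# Purely illustrative: a shared flag that the serving loop consults
# before every request. Triggering it stops all further AI activity.
kill_switch = threading.Event()

def serve(requests, run_model):
    """Handle requests until the kill switch is triggered."""
    for request in requests:
        if kill_switch.is_set():
            print("Kill switch activated: no further requests will be served.")
            break
        run_model(request)  # hypothetical inference call

# An operator or automated monitor would flip the switch with:
# kill_switch.set()
```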

Sam Altman's company, for example, set the Web abuzz with rumors of an AI called Q*, an AGI (Artificial General Intelligence) said to function in a similar way to a human brain, which would allow it, in theory at least, to perform the same tasks. We are talking about an AI capable of learning and understanding more and more things (see our article). "AGI would also carry a serious risk of misuse, serious accidents and societal disruption," concedes Sam Altman, who now sits at the head of the new safety committee (a bit of a conflict of interest, one might note…).

Additionally, an immediate shutdown feature provides an added layer of protection, ensuring that any situation can be quickly controlled and mitigated. "These commitments ensure that the world's leading AI companies will be transparent and accountable for their plans to develop safe AI," said Rishi Sunak, the Prime Minister of the United Kingdom, in a press release.

AI Summit: Ineffective commitments?

It is difficult to know whether this policy will really be effective, given that it does not define specific risk thresholds. Moreover, AI companies that were not present will not be subject to the pledge. And, above all, the commitment is not binding. It remains to be seen what concrete commitments it will yield, in the short and long term.

Following the summit, a group of participants wrote an open letter criticizing the forum's lack of formal regulation and the leading role played by AI companies in setting the rules for their own industry. "Experience has shown that the best way to tackle these problems is through enforceable regulatory mandates, not self-regulatory or voluntary measures," the letter reads. The next AI summit will take place in France in early 2025.
