American flexibility, by Robin Rivaton – L’Express


Artificial intelligence (AI) is a complex technology because, unlike the Internet or microcomputing, it carries a risk of human extinction. Among the scenarios floated is the idea that a terrorist organization could use it to create an ultimate biological weapon, or that AI could inadvertently destroy humanity, just as we humans have driven other species to extinction without being aware of the effects of some of our actions. These arguments are admittedly frustrating. As AI researcher Andrew Ng puts it, no one can prove that radio waves emitted from Earth won't lead aliens to find and wipe us out. It is in this context that the regulation of AI must be understood.

This fall, that regulation saw two major advances: the joint declaration that followed the AI safety summit organized by the United Kingdom and, a few days earlier, a much-anticipated executive order from the White House on artificial intelligence. The European Union and China have already made progress on the subject, but since the United States is home to all the major players in AI, from the GAFAM to innovative companies such as OpenAI and Anthropic, as well as open-source platforms like Hugging Face, its position was particularly awaited.

On October 30, President Biden signed an executive order on the safe, secure, and trustworthy development and use of AI. It sets out eight guiding principles, including protecting privacy, defending consumers, and advancing American leadership abroad.

While this executive order builds on previous guidance on AI, such as the White House's 2022 Blueprint for an AI Bill of Rights and the AI Risk Management Framework of the National Institute of Standards and Technology (NIST), the question was how binding it would be compared with the European framework. It will be more flexible, even much more flexible. The United States will exercise some government oversight over the most advanced AI projects, but there will be no licensing requirements or rules forcing companies to disclose training data sources, model size, or other important details. All generative AI projects and predictive models will be affected if they meet two cumulative conditions: they present risks to national security, economic security, or health; and they have been trained using an amount of computing power greater than 10^26 floating-point operations. They will then have to provide federal agencies with reliable and reproducible test results that can be made public.

NIST will develop a set of standards for testing these models by August 2024. The executive order remains silent on what follow-up the government might give to these test results. The 10^26 threshold was set after extensive discussions with industry giants, who wanted to avoid a barrier that would hamper their ability to innovate. It is roughly 100 times the computing power used to train LLaMA 65B, Meta's flagship open-source model. Given the pace at which the industry is developing and the effects of scale, this threshold could be reached within three to five years. In other words, while the European rules will come into force in 2025, the American ones might not come into play until 2027.
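To make that order of magnitude concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the common approximation that training compute is about 6 × parameters × training tokens, together with Meta's published figures for LLaMA 65B (65 billion parameters, roughly 1.4 trillion tokens); neither the formula nor those figures appear in the executive order itself.

```python
# Rough comparison of the executive order's reporting threshold with the
# estimated training compute of LLaMA 65B.
# Assumption: training compute ~ 6 * parameters * training tokens (a common
# rule of thumb, not something stated in the executive order).

THRESHOLD_FLOP = 1e26          # reporting threshold set by the executive order

llama_65b_params = 65e9        # 65 billion parameters (Meta's published figure)
llama_65b_tokens = 1.4e12      # ~1.4 trillion training tokens (Meta's published figure)

llama_65b_flop = 6 * llama_65b_params * llama_65b_tokens  # ~5.5e23 FLOP

print(f"Estimated LLaMA 65B training compute: {llama_65b_flop:.1e} FLOP")
print(f"Threshold / LLaMA 65B: {THRESHOLD_FLOP / llama_65b_flop:.0f}x")
```

Under these assumptions the ratio comes out at one to two hundred, consistent with the "roughly 100 times" figure cited above.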

The main fear expressed by American companies was the risk of being overtaken by other, less cautious countries. This was taken into account, as was the idea that local regulation matters less than global governance. The ability to engage in dialogue with other large states, first and foremost China, is necessary. That task falls to Vice President Kamala Harris, whose role now encompasses addressing the full range of AI risks. Harris represented the United States among the 28 signatories of the Bletchley Declaration, alongside the United Kingdom, China, and the European Union; the declaration, which concluded the AI safety summit, reflects a consensus on the need for regulation.

Robin Rivaton is Managing Director of Stonal and a member of the Scientific Council of the Foundation for Political Innovation (Fondapol).
