This American bill that scares businesses

Two letters and four numbers have been causing panic in Silicon Valley for several months: “SB 1047.” The designation refers to a California bill whose aim is to regulate the artificial intelligence industry. Largely inspired by the European Union’s AI Act, the text seeks to prevent the excesses of AI while still allowing companies to innovate. Industry leaders have expressed alarm at this tightening of regulation.

Since then, an intense battle has been waged between industry lobbyists on one side and the California legislature on the other, with a vote expected at the end of August.

A text to prevent abuses

The bill seeks to hold companies accountable in several areas. It would prohibit developers from marketing or making available their artificial intelligence models if they risk “causing critical harm.” The text would also require AI creators to conduct an annual audit to confirm that their work meets the requirements of the law.

It would also require reporting “any security incident related to artificial intelligence” to a body set up by the government, and would provide protections for whistleblowers: employees who witness bad practices at their company and decide to report them. Finally, California would require companies to install a “kill switch,” a mechanism capable of immediately taking offline an artificial intelligence model that presents a danger.

“These rules would apply from the moment the models are designed,” notes Nathalie Beslay, a lawyer specializing in the regulation of tech and artificial intelligence. “The bill treats these models as products, not as tools. Hence its quality, safety and reliability requirements for these AIs, meant to neutralize the risks linked to hallucinations and misuse.”

The text primarily concerns “high-powered” models, says Nathalie Beslay. It targets those whose training cost exceeds $100 million, roughly the scale of models like ChatGPT. And some experts believe that within a few years, training LLMs [editor’s note: large language models] could cost up to several billion dollars. The law would therefore end up covering the vast majority of such AIs.
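To make the order of magnitude concrete, here is a minimal back-of-the-envelope sketch of how such a cost threshold could be checked. Every figure in it (GPU count, run length, hourly price) is an illustrative assumption, not a number from the bill or from any real training run:

```python
# Back-of-the-envelope check against SB 1047's $100 million
# training-cost threshold. All figures are illustrative assumptions,
# not numbers from the bill or from any real training run.

GPU_HOURLY_RATE_USD = 2.50         # assumed cloud rental price per GPU-hour
SB1047_THRESHOLD_USD = 100_000_000

def training_cost_usd(num_gpus: int, training_days: float,
                      hourly_rate: float = GPU_HOURLY_RATE_USD) -> float:
    """Estimate cost as GPU count x wall-clock hours x hourly rate."""
    return num_gpus * training_days * 24 * hourly_rate

# A hypothetical 20,000-GPU run lasting 90 days
cost = training_cost_usd(num_gpus=20_000, training_days=90)
print(f"Estimated cost: ${cost:,.0f} -> covered by SB 1047: {cost >= SB1047_THRESHOLD_USD}")
```

Under these assumed numbers the run costs about $108 million and would fall within the bill’s scope; the point is how quickly large clusters and long runs push past the threshold.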

Tech lobbies have obtained adjustments

This is precisely what scares companies in the sector: the application of this law would radically change the way most language models and generative AIs are developed. Last June, the startup accelerator Y Combinator shared an open letter, signed by about 100 industry figures, arguing that “the bill could unintentionally threaten the dynamism of California’s technology economy and harm competition.” The authors called on the legislature to abandon certain measures, or at least to soften them.

OpenAI, long silent in this controversy, finally spoke out on August 21. In a letter sent to the senator behind the text, the company argues that “SB 1047” risks “slowing the pace of innovation” and “encouraging California entrepreneurs to leave the state in search of better opportunities elsewhere.” The stance is all the more surprising given that the company’s leaders have consistently called for the sector to be regulated. In 2023, during a hearing before the US Senate, Sam Altman, one of its co-founders, urged the government to take measures to regulate artificial intelligence. More recently, OpenAI said it was concerned about the emotional dependence its AI could create in some users.

Like the European startups that won several amendments to the AI Act, their American counterparts have prevailed on certain points, reports the website TechCrunch. The bill no longer allows the attorney general to sue companies for negligence before an incident has occurred. It will, however, still be possible to order them to cease an activity, or to pursue developers once harm has been proven. The creation of a dedicated government oversight agency has also been abandoned.

A law soon to be obsolete?

Startups’ fears are not entirely unfounded. Legislators often struggle to grasp all the intricacies of technical innovation, and with AI advancing at breakneck speed, these laws may quickly become obsolete. The European regulation, for example, classifies models as “presenting a systemic risk” when the compute used to train them exceeds 10^25 floating-point operations (FLOPs). This threshold spares the smallest players: it is currently exceeded only by OpenAI’s ChatGPT and Google’s Gemini. But it will probably be easy to cross within a few years, and the scope of the text is therefore likely to expand considerably.
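As a rough illustration of what that threshold means in practice, the sketch below applies the widely cited rule of thumb of about 6 FLOPs per parameter per training token to two hypothetical model sizes. The parameter and token counts are assumptions for illustration, not figures from the regulation or from any vendor:

```python
# Rough check against the AI Act's 10^25 FLOP "systemic risk" threshold,
# using the widely cited ~6 FLOPs-per-parameter-per-token approximation
# for the compute of a single training run. Model sizes are hypothetical.

AI_ACT_THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate training compute: about 6 FLOPs per parameter per token."""
    return 6 * parameters * tokens

for name, params, tokens in [
    ("hypothetical startup model", 7e9, 2e12),     # 7B parameters, 2T tokens
    ("hypothetical frontier model", 1e12, 15e12),  # 1T parameters, 15T tokens
]:
    flops = training_flops(params, tokens)
    status = "systemic risk" if flops >= AI_ACT_THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```

Under these assumptions the smaller model lands around 8.4 x 10^22 FLOPs, well below the line, while the frontier-scale run reaches roughly 9 x 10^25 FLOPs and crosses it, which is why critics argue the fixed threshold will not stay exclusive for long.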

The same criticism is being leveled at “SB 1047.” “Given the rapid advances in computing, it is likely that in a short time the current threshold set by the legislation will be exceeded, including by start-ups, researchers and academic institutions,” explains Mozilla.

Some of the risks raised by AI players are, however, exaggerated. The bill will, of course, generate additional costs, but “the budget for AI training is already so high that the cost of regulation is tiny,” notes Nathalie Beslay. As for the risk of an exodus, it is, according to the expert, overestimated: “American companies did not flee Europe despite the GDPR, the data protection regulation.”

“It’s always the same debate when we talk about regulating technologies: regulation is pitted against innovation,” continues Nathalie Beslay. A simplistic framing, notes a report from the National Bureau of Economic Research. According to the research organisation, while a more regulated economy is indeed likely to produce less innovation, when its firms do innovate, “they tend to make more radical, labour-saving breakthroughs.” The European Union sometimes has sharper instincts than it is given credit for.
