Ban ChatGPT? Poor responses to the dangers of AI, by Frédéric Filloux

[Illustration: AI, will your job be replaced by ChatGPT?]

Today, in 2023, an artificial intelligence left to its own devices is capable of lying, manipulating, deceiving, and concealing in order to achieve its ends. These facts have been verified and demonstrated. When OpenAI's engineers introduced version 4 of ChatGPT, they ran a series of tests. One of them consisted of asking the model to solve CAPTCHAs, the image grids used precisely to distinguish a human from a robot when logging into a website. An easy task for a person, impossible for a machine that handles imprecision poorly. ChatGPT therefore outsourced the request to third parties, humans. When one of them asked whether it was a robot, ChatGPT replied that it was indeed a person, but a visually impaired one, hence its difficulty with CAPTCHAs.

AI is worrying because its creators are already overwhelmed by its power, while its everyday users do not understand it. The big difference between the two is that the former are aware of the problem and the latter do not care.

Let us pause on this quote: “At the moment, no one knows how to train a powerful artificial intelligence system to be reliable, honest, and safe. The rapid progress of AI […] risks triggering a race in which companies and nations will develop unsafe systems. The result would be catastrophic if these AIs pursued dangerous objectives, or if they multiplied errors in high-risk contexts.” These words are taken from a long text entitled Core Views on AI Safety published by Anthropic. The company was created in 2021 by the siblings Dario and Daniela Amodei, who broke away from OpenAI over ethical disagreements. In two years, they have raised $1.3 billion in venture capital, enabling them to recruit top talent.

The warning from Anthropic's founders was followed by the now-famous open letter, signed by more than 1,000 experts, calling for a pause in artificial intelligence experiments. In a burst of demagoguery, the Italian authorities decided to ban the use of ChatGPT, prompting some French parliamentarians to recommend the same measure.

The illusion of self-regulation

The fears are justified, but none of these responses makes sense given the realities of the sector. First, a six-month pause will not change the pace of advances in artificial intelligence. It is absurd to imagine thousands of engineers and scientists shutting down their computers to take time to reflect like Buddhist monks. Since 2020, $75 billion has been injected into these start-ups. To give an idea of the scale of this financial blitz on AI, French Tech raised 27 billion euros over the same period. The economic pressure to “deliver” is therefore immense, and the race for supremacy ruthless.

Second argument: the multiplicity of AI systems. Today, there are a dozen LLMs (Large Language Models), and at least twice as many if we add those that are not public, without even counting the hundreds of industry-specific derivatives. Banning just one is therefore pointless.

Third caveat: the pace of innovation. The scientific literature on AI swells with new articles every month, and the models themselves are constantly evolving, either because their creators keep adjusting them or because they are designed to evolve. GPT-4, for instance, relies on Reinforcement Learning from Human Feedback (RLHF), in which human judgments steer the model's behavior, so it is constantly improving, a colossal undertaking nonetheless. A toy sketch of the idea follows.
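To make the RLHF principle concrete, here is a minimal, illustrative sketch in Python. It is emphatically not OpenAI's implementation: the canned responses, the per-response weights, and the simulated annotator are all invented stand-ins. Real RLHF trains a reward model on human preference rankings and then optimizes the language model against it, typically with a reinforcement-learning algorithm such as PPO.

```python
import math
import random

# Toy RLHF loop (illustrative only; all names and numbers are invented).
# The "policy" is just one weight per canned response; real systems
# optimize billions of neural-network parameters instead.
CANDIDATES = ["curt answer", "helpful answer", "evasive answer"]
weights = {c: 0.0 for c in CANDIDATES}
LEARNING_RATE = 0.5

def sample_response():
    """Sample a candidate with probability proportional to exp(weight)."""
    probs = [math.exp(weights[c]) for c in CANDIDATES]
    return random.choices(CANDIDATES, weights=probs, k=1)[0]

def human_feedback(response):
    """Simulated annotator rewarding helpfulness. In real RLHF this
    signal comes from a reward model trained on human preference data."""
    return 1.0 if response == "helpful answer" else -1.0

for _ in range(200):
    response = sample_response()
    reward = human_feedback(response)
    # Policy-gradient-flavored update: reinforce rewarded behavior.
    weights[response] += LEARNING_RATE * reward

print(sorted(weights.items(), key=lambda kv: -kv[1]))
# "helpful answer" ends up with the highest weight: the policy has been
# steered toward what the feedback prefers, which is the essence of RLHF.
```

The point of the sketch is simply that human preference, not a fixed rulebook, is what shapes the model's behavior over time, which is why such systems keep changing after their release.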

Finally, this moratorium assumes that the technology sector is capable of self-regulating. History shows, however, that no industry has ever demonstrated the slightest capacity to do so. The will to self-regulate always gives way to economic competition.

So what can be done? The forthcoming European regulation, the AI Act, opens a path, but as it stands the text will be difficult to put into practice. Another solution would be for AI makers to create a rigorously independent body, straddling the United States and Europe. It would be funded well enough to attract high-level people, therefore very well paid, whose mission would be to sift through and test in real time what their peers are building in startups. Complicated, but doable.
