Few French philosophers are passionate about science while seriously inquiring into major technological advances. In AI, Great Replacement or Complementarity?*, Luc Ferry brilliantly sums up the great philosophical, ethical and political questions posed by artificial intelligence. The former Minister of National Education makes a moderate voice heard, between, on one side, the intellectuals who cry disaster and, on the other, the techno-optimists who assure us that AI will save the world.
While Paris hosts a summit on the subject, Luc Ferry explains why AI will have major consequences for the job market, but also why the prospect of uploading one's brain after death, as Sam Altman wishes, is illusory.
L’Express: When it comes to artificial intelligence, you say, French intellectuals often sin either by denial ("AI will never replace us, we are too essential") or by progressophobia ("AI threatens humanity, it must be stopped"). What do you criticize in these two currents?
Luc Ferry: I criticize the first for ignoring the dazzling progress of multimodal AIs, for failing to try the latest versions, and the second for swimming in pure regulatory utopia. In the first chapter of my book, I analyze the current performance of generative AI. If you take the trouble to subscribe to the latest versions, they far surpass humans in many areas: in medicine, in legal analysis, in writing press articles, in translation, but also, now, in solving math or physics problems. I am not completely uneducated, but ChatGPT is millions of times more cultivated than I am, and the algorithms it has at its disposal to analyze the knowledge it has ingested are increasingly efficient. It is for this reason that Sam Altman has said we will see "unicorns" without a single employee appear in the coming decade.
I would add that AI will not only replace millions of white-collar workers but also blue-collar ones, since it is now integrated into humanoid robots that are perfectly capable of replacing construction workers.
As for pausing to take time to think, that is all very nice, but if we did it in the West for moral reasons, China, Russia and the theocracies would seize the opportunity to accelerate like never before, knowing that those who master AI will dominate the world, including militarily...
Conversely, you also criticize scientists and entrepreneurs who indulge in an overly optimistic techno-solutionism. Why?
Because AIs, unlike humans, do not choose their values; they are "aligned" with ethical codes by programmers who choose them for them. I like science; I taught in Paris on the theme "Biology and Philosophy" with my friend Jean-Didier Vincent, and it was there that I discovered the role of AI in the health field, especially in cancer, where it will save millions of lives. Today's philosophers often know the sciences poorly, yet we cannot speak seriously about AI without having done the scientific groundwork. Now, science as such has no values. It can tell you that smoking causes cancer, not whether or not you should quit smoking! Technology therefore cannot solve all our problems, because everything depends on what we choose to do with it...
You lean toward those who believe AI will have a colossal impact on the job market. Hasn't every new technology generated similar fears? And won't AI create new jobs?
The AI revolution has nothing to do with those of the past, such as steam, electricity or the combustion engine, first because it affects every sector of human life rather than one particular area, and second because it challenges human beings in what had until now been their monopoly: intelligence and language. It will of course create new jobs, but they will be so sophisticated that they will never replace, in quantity, those it will eliminate. I earned a psychology degree when I was young, and I gave ChatGPT IQ tests myself: it scores above 150, which puts it above the most intelligent portion of the human population. We can of course criticize these tests, but they still give us indications...
In fact, as your first question suggested, AI is so frightening that people bury their heads in the sand when they ought to wake up! Tech bosses plead for a universal basic income in order to placate the people AI will put out of work, but it would be a disaster. Sam Altman funded a two-year study on the subject, paying unemployed people $2,000 a month to stay at home. A disaster! Antidepressants, alcoholism, suicides. We will have to organize complementarity wherever possible, and where it is not, I propose in my book developing a civic service for adults on the model of the one I created in France for young people...
For degrowth theorists, on the contrary, it would be paradise...
For some degrowth theorists, the end of work would indeed be the best news of the millennium. The study I just mentioned shows it is quite the opposite. Without work, we not only lose social integration, we also risk losing self-esteem, because we no longer progress...
"Locking a brain up for eternity in a noosphere would be a definition not of paradise but of hell"
Raphaël Enthoven, who sat the philosophy baccalaureate against ChatGPT, asserted last year that an artificial intelligence could never compete with a human in the field of philosophy. You are much more circumspect than he is. Why?
Raphaël, for whom I have had nothing but esteem and friendship for over thirty years, proclaims urbi et orbi two ideas that I do not share: first, that philosophy consists in constructing problematics, and second, that AI, even in ten thousand years, will always be incapable of constructing one. I think exactly the opposite, namely that philosophy has nothing to do with constructing problematics, and that in any case an SLM (small language model) fed 1,000 philosophy essays written by normalien agrégés would obviously be capable, with a little "training", of constructing superb problematics and even of passing the agrégation.
Yet philosophy does not consist in writing three-part essays but, in all the great thinkers who have marked its history, in answering three fundamental questions: that of knowledge (of the different types of truth), that of values (moral, political, aesthetic) and finally, as its name suggests, that of wisdom (of the good life, in every sense of the term). AI will never be a philosopher, at least as long as it remains weak AI, because, as I have already suggested, LLMs (large language models) do not choose their values themselves; they are "aligned" with philosophical and ethical codes. Ultimately, the question you ask brings us back to that of free will, that is, the free choice of values. If I were a Spinozist and held the idea of free will to be "delusional", I would conclude that there will soon be no difference between AI and humans. For a determinist, indeed, we are just as "aligned" as intelligent machines are: with our social environment (Bourdieu), our genes (Changeux), our family history (Freud), and so on. That is why I devote a chapter of my book to the critique of determinism, which is in reality only a hazy metaphysical idea, "non-falsifiable" as Popper would say...
We often confuse AGI (artificial general intelligence), which seems inevitable in the years or decades to come, with strong AI, that is to say an intelligence endowed with consciousness. Why don't you believe in the latter?
I take part in groups of engineers, and almost all of them believe we will achieve strong AI, hence conscious machines endowed with emotions, that is to say a posthumanity which will be immortal since it will no longer be embodied in a perishable biological body. They believe in it because they are Spinozists and materialists, hence determinists, and as such they think we are already nothing other than machines.
For the reasons I have already given you, I do not believe it, because ultimately the difference between human and machine lies not in intelligence but in the choice of values. You can be a genius like Heidegger and be a Nazi, intelligent like Foucault or Sartre and be a Maoist or a great admirer of Khomeini. The choice of values is not a matter of intelligence; it presupposes freedom and affect, which machines will never have, at least as long as they are not hybridized with living beings, with humans...
What are the differences between transhumanism and posthumanism? And why do you mock those who, like the billionaire Marc Andreessen, dream of achieving immortality through digital intelligence?
I had already clarified things in my book The Transhumanist Revolution. Transhumanism aims at only one thing: fighting old age, even "curing" it, so that one day we can live to 200 (or more...), young and in good health. Cell reprogramming and senolytics will certainly make this possible; all the scientists working seriously on the subject are convinced of it; it is only a matter of time.
Posthumanism aims at something else. It seeks to create a digital double, a digital twin of ourselves that could live forever in the AI noosphere. Those who believe in it think, like Spinoza, that if man is mortal, he is nevertheless eternal as a "degree of power" within the divine understanding. For posthumanists, the AI noosphere plays the same role as Spinoza's divine understanding. It is in this sense that Sam Altman has asked for his brain to be uploaded to the noosphere when he dies, so that he can continue to "live" there eternally. I keep telling my friends who share this ideal that their digital twin is not them, and that locking it up for eternity in a noosphere would be a definition not of paradise but of hell...
* AI, Great Replacement or Complementarity?, by Luc Ferry. L'Observatoire, 325 pp., €23.