Anthropic, one of the most serious competitors of OpenAI, the creator of ChatGPT, recently announced that it had recruited its first expert dedicated to the “well-being of artificial intelligence”. This specialist will work within existing teams devoted to the “sciences of alignment”, a fast-developing discipline whose aim is to study the coherence of AI’s objectives, intentions and actions with human values, preferences and expectations.
The question of AI well-being is bound up with a broader and more ambitious one: what moral consideration should we guarantee to new software entities that act, think, or are even conscious? The expert hired by Anthropic considers, like other AI thinkers and practitioners, that technical evolution tends to make it “realistic” that such levels of sophistication will soon be reached. Some speak of 2026, or even 2025! In other words, we would be at the dawn of a major anthropological revolution.
The consequences of such a hypothesis are dizzying for moral philosophy as a whole. Man would lose his ontological privilege, the one that makes him the only conscious species, a privilege he would exchange, paradoxically, for that of a non-demiurgic “creator”, exposed to the risk of his own creation’s emancipation. In the extreme scenario of achieving strong (conscious) AI, and even in the scenario of AGI (artificial general intelligence) – not conscious, but capable of simulating “robust”, that is to say advanced, levels of consciousness and agency – it would be up to us to rethink our rights and our duties towards these entities.
A new Valladolid controversy
Such developments rest on a substrate of demands calling for the return of politics to every sphere of existence (private life, human relations, business, etc.), with politics itself subordinated to morality: the militant recriminations of neo-feminism, decolonialism and environmentalism attest to this. Yet if everything is political, and if politics is conditioned by morality, then everything becomes moral.
From this perspective, emerging reflections on the alignment between the integrity of humans and that of advanced AI unfold simultaneously on the hybrid terrain of politics, morality and norms. We cannot understand the growing influence of anti-speciesist and transhumanist speculation on our Western societies without realizing that it is in fact underpinned by the anticipatory fear of a potential Copernican revolution concerning man’s place in the universe.
A new Valladolid controversy is quietly brewing, one in which the task would be to define the conditions of the humanity of non-human entities so as to situate them in relation to us. However, we are not dealing with uncreated beings, but with technical productions. That the fine detail of their workings already escapes us – it is indeed impossible to fully explain the behavior of today’s advanced AIs – does not imply that we hold no superior rights over them. These rights derive directly from our fundamental duties towards all of humanity: respect for human dignity, the preservation of human integrity and the pursuit of human freedom.
When prophets are also kings
In a society marked by complexity and uncertainty, the prospective approach is healthy. It involves reasoning ex ante in order to consider what could happen, without certainty, but with the necessary analytical rigor and prudence. This is not how the new prophets of technology proceed: their pronouncements are often unduly peremptory and falsely assertive. As a result, many commentators are mistaken when they take Sam Altman (chief executive and co-founder of OpenAI) to be predicting the arrival of AGI in 2025 on the grounds that he “now knows how to do it”, or Dario Amodei (of Anthropic) to be asserting a high probability of a very near completion horizon. Knowing what to do, “knowing the pitfalls”, does not mean “being assured of achieving it”.
On closer inspection, neither of them is announcing anything; they are speculating by extrapolating from past successes. Yet technical progress does not follow a linear course: like biological progress, it proceeds by leaps. In other words, doubling computing power will not double the probability of AGI emerging, much less that of strong AI. “The future is not given,” Prigogine affirmed; when the prophets are also the kings, that is to say those who decide part of the future, we should be all the more wary of their warnings. They are in fact performative: their very utterance shapes the future.
*Sami Biasoni holds a doctorate in philosophy from the École normale supérieure and is a lecturer at ESSEC. He edited the collective work Discomfort in the French Language and is the author of The Statistically Correct, published by Éditions du Cerf.