Yuval Noah Harari: “Artificial intelligence threatens the survival of human civilization”


Fears related to artificial intelligence have haunted humanity since the beginning of the computer age. Until now, these fears have focused on machines using physical means to kill, enslave, or replace people. But in recent years, new AI tools have emerged that threaten the survival of human civilization in an unexpected way. AI has acquired remarkable abilities to manipulate and generate language, whether words, sounds, or images. Artificial intelligence has thus hacked the operating system of our civilization.

Language is the raw material of almost all of human culture. Human rights, for example, are not in our DNA. Rather, they are cultural artifacts that we have created by telling stories and writing laws. The gods have no physical reality. Again, these are cultural artifacts that we have created by inventing myths and writing sacred texts. Money, too, is a cultural artifact. Banknotes are just colored pieces of paper, and today more than 90% of money is not even banknotes anymore, but digital information stored in computers. What makes money valuable are the stories that bankers, finance ministers or cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff weren’t particularly good at creating real value, but they were all great storytellers.

What will happen when a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing pictures, and writing laws or scriptures? When people think of ChatGPT and other new AI tools, they are often drawn to examples like schoolchildren using them to write their essays. What will happen to the school system when children do this? But such questions miss the point. Forget school homework. Think of the 2024 campaign for the next US president, and try to imagine the impact of AI tools that can be used to mass-produce political content, fake news, and sacred texts for new cults.

A battle for our intimacy

In recent years, the QAnon cult has coalesced around anonymous messages posted online, signed with the letter “Q”. Followers have collected, revered and interpreted these messages as if they were a sacred text. While, as far as we know, all of Q’s posts were written by humans and bots only helped spread them, in the future we may witness the birth of the first cults whose sacred texts are written by a non-human intelligence. Throughout history, religions have claimed a non-human source for their holy books. This could soon become a reality.

More prosaically, we may soon find ourselves having long online discussions about abortion, global warming, or the Russian invasion of Ukraine with entities we think are humans but are actually AIs. The catch is that it is utterly pointless for us to spend time trying to change the stated opinions of a conversational AI. The AI, on the other hand, could refine its messages with such precision that it would have a good chance of influencing us.

With its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change their opinions and worldviews. There is no indication that AI has consciousness or feelings of its own, but to foster false intimacy with humans, it does not need them: it is enough that the user feels emotionally attached to it. In June 2022, Google engineer Blake Lemoine publicly claimed that Lamda, the chatbot he was working on, had become sentient. This controversial statement cost him his job. The most interesting thing about this episode is not Mr. Lemoine’s claim, which is probably false. Rather, it is his willingness to risk a lucrative position for the sake of a chatbot. If AI can make people risk their jobs for it, what else could it make them do?

In the political battle for minds and hearts, intimacy is the most effective weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the next generation of AI, the front is shifting from attention to intimacy. What will happen to human society and our psychology when AIs fight each other to create false intimate relationships with us, relationships that can then be used to convince us to vote for a particular candidate or buy a particular product?

Even without creating “false intimacy”, the new AI tools will have a considerable influence on our opinions and worldviews. People could come to follow the advice of a single AI, as if it were an all-knowing oracle. No wonder Google is terrified. Why bother using a search engine when you can simply ask the oracle? The news and advertising industries should also be appalled. Why read a newspaper when I can ask the oracle for the latest news? And what are ads for, when I can simply ask the oracle what to buy?

An extraterrestrial intelligence

And even those scenarios don’t capture everything that is at stake with this technology. What we are talking about is potentially the end of human history. Not the end of history, but the end of its human-dominated part. History is the interplay between biology and culture, between our physical needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process by which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins to produce stories, melodies, laws and religions? Previous tools, such as the printing press and radio, helped spread the cultural ideas of humans, but they never created new cultural ideas of their own.

At first, the AI will likely imitate the human prototypes it was trained on. But, over the years, AI culture will boldly go where no human has gone before. For millennia, human beings have lived inside the dreams of other humans. In the decades to come, we may find ourselves living inside the dreams of an extraterrestrial intelligence.

The fear of AI has only haunted mankind for a few decades. But, for thousands of years, humans have been haunted by a much deeper fear. We have long recognized the power of stories and images to manipulate our minds and create illusions. That is why, since ancient times, humans have feared being trapped in a world of illusions.

In the 17th century, René Descartes was tormented by the idea that a malevolent demon had locked him inside a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous allegory of the cave, in which a group of people are chained all their lives inside a cave, facing a blank wall: a screen. On this screen, the prisoners see various shadows projected, and they take these illusions for reality.

In ancient India, Buddhist and Hindu sages pointed out that all humans live trapped in Maya, the world of illusions. What we normally take for reality is often just a fiction in our own minds. People can wage wars, kill others, and accept being killed themselves, because they believe in one illusion or another.

The AI revolution confronts us with Descartes’ demon, with Plato’s cave, with Maya. If we are not careful, we risk being trapped behind a curtain of illusions that we will not be able to tear away, or even suspect exists.

Of course, the new power of AI could also be used for positive purposes. I will not dwell on this point, because the people who develop these technologies talk about it enough. The job of historians and philosophers, like me, is to point out the dangers. But it is certain that AI can help us in countless ways, from finding new cancer treatments to solutions to the ecological crisis. The question is how to ensure that these new tools are used for good rather than evil. To do this, we must first be aware of the true capabilities of these innovations.

As with nuclear technology

Since 1945, we have known that nuclear technology can produce cheap energy for the benefit of human beings, but it can also physically destroy human civilization. We have therefore reshaped the entire international order to protect humanity and to ensure that nuclear technology is used primarily for good. Today we face a new weapon of mass destruction that can annihilate our mental and social world.

We can still regulate these new AI tools, but we must act quickly. While nukes cannot invent more powerful nukes, AI can create exponentially more powerful AI. The crucial first step is to require rigorous safety checks before these tools are released into the public domain. Just as a pharmaceutical company cannot bring new drugs to market before testing their short- and long-term side effects, tech companies should not bring new AIs to market before making them safe. We urgently need an equivalent of the Food and Drug Administration for these new technologies.

Won’t slowing the public deployment of AI cause democracies to lag behind more repressive authoritarian regimes? Quite the contrary. Unregulated deployments of AI would create social chaos, which would only benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. If AI hacks language, it could destroy our ability to have meaningful conversations, and thereby destroy democracy.

We have just encountered an extraterrestrial intelligence here on Earth. We don’t know much about it, except that it could destroy our civilization. We should halt the irresponsible deployment of AI in the public sphere and regulate it before it regulates us. The first regulation I would suggest is to require AI to disclose that it is an AI. If I am chatting with someone and cannot tell whether it is a human or an AI, that is the end of democracy.

This text was generated by a human.

Or was it?

* Yuval Noah Harari is a historian, philosopher and the author of, among other books, Sapiens, Homo Deus and the children’s series Unstoppable Us, all published in French by Albin Michel. He is a professor at the Hebrew University of Jerusalem and the co-founder of the organization Sapienship. Copyright The Economist Newspaper Limited, London, 2023.
