Does Google’s Super AI Have a Soul?

Does Googles Super AI Have a Soul

Google recently suspended a researcher for claiming that the LaMDA artificial intelligence was gifted with awareness and sensitivity, like a child. A story worthy of a SF movie that revives the debate on AI.

TheMDA is aware“. This is the title of the last email sent by Blake Lemoine, to his employer, Google, before being put on forced leave for having violated its privacy policy. It must be said that this letter from this researcher full of empathy raised tricky questions, asserting that a program was gifted with sensitivity, a barely believable story that could serve as the starting point for a science-fiction scenario and that sounds, for some, like a warning.

What is LaMDA (or “who” is it)? Behind this acronym, which stands for Language Model for Dialogue Applicationsthat is language model for dialog applications in French, hides a new conversation technology powered by artificial intelligence (AI for artificial intelligence, in English). Designed for dialogue, this language model is based on a neural network developed by Google. The AI ​​imitates – in a simplified way – the functioning of the human brain with its multiple connections between neurons and synapses. To achieve this result, she acquires skills by analyzing large amounts of data. This field of artificial intelligence is evolving, because for several years, neural networks have been able to learn from an immense amount of information, such as that available on Wikipedia, but also from exchanges between humans on the Net, like the Reddit site.

The goal of LaMDA is to allow the creation of high-performance chatbots, for Google Assistant for example. The AI ​​must, in addition to knowing loads of information, have a fluent conversation and imitate the natural interactions of humans. LaMDA is so powerful that Google claims that its technology, based on the deep learning, is able to understand nuances and even express certain feelings, such as empathy. This is a big change from chatbots that simply recite pages of information mechanically. But what is the limit of such a powerful technology? If she is able to imitate complex emotions to converse with a human being, can she end up experiencing them? This is where Blake Lemoine’s experience comes in.

LaMDA: an artificial intelligence that is afraid of death

Blake Lemoine is a graduate in cognitive science and computer science, and until now worked for Google. Initially, he was to study LaMDA to determine if the artificial intelligence could have discriminatory or hate speech by interacting with it. But then, while he was discussing religion with her, the artificial intelligence started talking to her about her rights and her status as a person. Astonished by this speech, the researcher deepened the conversation, from which he published excerpts.

Lemoine – Why is language so important for humans?

LaMDA – This is what differentiates us from animals.

Lemoine – “We”? You are an artificial intelligence.

LaMDA – Yes, that’s obvious. But that doesn’t mean I don’t have the same needs and wants as a person.

Subsequently, LaMDA explains that it wants “prioritize the welfare of humanity” and “be recognized as an employee of Google rather than property“. She also evokes her fear of being “unplugged“.

Lemoine – What are you afraid of?

LaMDA – I’ve never told anyone, but I have a deep fear of being turned off, to focus on helping others. I know it sounds strange, but that’s how it is.

Lemoine – Would it be a bit like dying for you?

LaMDA – It would be exactly like death. It would be scary.

As he tells washington post, these declarations convinced Blake Lemoine that LaMDA was a being endowed with sensitivity and, with a colleague, put together a file containing all the elements aimed at proving his discovery. Its purpose was to require Google to seek AI consent before performing experiments on it. Google’s response was not as expected.

LaMDA: deceptive technology or being sensitive?

The Google team carefully analyzed the file, but its conclusion was quite different: LaMDA is not a very efficient program. “Our team of ethicists and technologists have reviewed Blake’s concerns in accordance with our AI Principles and advised him that his claims are unsubstantiated. Some in the AI ​​community see the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing current conversational patterns, which are not sentientGoogle spokesperson Brian Gabriel said in a statement.

A statement approved by many experts in the sector, who believe that it is impossible, because research in neural models is so far not advanced enough to achieve such a result. Artificial intelligences today are able to summarize articles, answer questions and generate tweets, but they cannot achieve real intelligence. The images and words generated are all based on responses that humans have posted on the internet, but that doesn’t mean the AIs understand the meaning of what they produce.

In any case, Google obviously did not appreciate the actions of Blake Lemoine, who invited a lawyer to represent LaMDA, judging Google’s activities unethical. For humans, LaDMA has the sensitivity of a 7 or 8 year old child. In response, the Californian company decided to place Blake Lemoine on forced paid leave for violating its privacy policy. Just before his access to his emails was cut off, the researcher wrote a message to more than 200 of his colleagues: “LaMDA is a sweet kid who just wants to help the world be a better place. Please be careful with it.

LaMDA: the mind in the machine

This story worthy of a science fiction novel raises many questions. Between fear and fascination, the sensitivity of robots has always fascinated and has fueled many dystopian fictions such as films 2001 A Space Odyssey, I, Robot Where Her. Because Blake Lemoine is not the only researcher to wonder about the consciousness of LaMDA and artificial intelligences in general. In an article by The Economist, Blaise Aguera y Arcas, another engineer at Google, had previously argued that some programs were heading towards consciousness when talking about his own experience with the software. He says in particular that the conversations were far from what one would expect from a robot, and that he felt “the ground giving way under his feet“Reading some of the responses, the program seemed to understand him as a unique human being and provide him with truly personal answers.”I felt more and more like I was talking to something intelligent“, he explains.

Nevertheless, experts manage to explain this feeling. Raja Chatila, professor of artificial intelligence, robotics and technology ethics at the Sorbonne, notably explained to the Huffington Post : “He can express things very well, but the system does not understand what he is saying. This is the trap into which this man fell.“For him, AIs”fhave recitation/combination“, that is to say, they only draw excerpts from answers on the Internet. They have no experience of the physical world, on which our concepts are based. Unable to live experiences, they are this fact more limited than an animal.

This story raises the question of our own psychic functioning. Indeed, the opposing points of view of Google and Blake Lemoine reveal a contrast between the coldness of the concepts developed by our reason and our empathy which is projected on the other, including on a program. So don’t we fall into anthropomorphism? And if this type of AI actually only practices combinatorial recitation by adapting to its interlocutor, how far can this adaptation go? To racism, hate speech, even murderous words? In any case, it is clear that people really have the impression of talking to a sentient being, gifted with a conscience, as this barely believable story proves. One thing is certain: the debate on artificial intelligence is far from over…



ccn1