a miracle method so that they no longer make mistakes? – The Express


This is the Achilles heel of generative AI, the flaw every company in the sector is trying to correct. The answers that ChatGPT and its rivals give to our questions are always well phrased. Unfortunately, they are not always accurate. They can invent scientific publications with aplomb, or narrate the reign of kings who never existed. Solving this problem would be a decisive turning point for the sector: it would disrupt nothing less than the way we learn and obtain information. This is why the AI sphere is currently abuzz over three little letters, "RAG", an acronym for a promising method: Retrieval Augmented Generation.

To understand this revolution, we must first look at why generative AIs sometimes say completely false things. The large language models (LLMs) on which tools such as ChatGPT or Gemini (formerly Bard) are based are trained on huge text databases. What do they learn from them? "To predict the most likely next word when we give them the beginning of a text," explains Benoît Sagot, a researcher specializing in natural language processing at the French National Institute for Research in Computer Science and Automation (Inria). "Step by step," the expert continues, "the AI predicts one word, then the next, sometimes to the point of producing long texts."
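The word-by-word prediction Benoît Sagot describes can be sketched with a deliberately tiny model. This is a hypothetical illustration only: real LLMs are neural networks trained on billions of words, not the simple word-pair counts used here.

```python
from collections import Counter, defaultdict

# A toy training corpus (hypothetical; real models ingest billions of words).
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Generate step by step: predict one word, then the next, and so on.
text = ["the"]
for _ in range(4):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # → "the cat sat on the"
```

The generation loop at the end is the key idea: the model never "plans" a sentence, it only extends the text one most-likely word at a time.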


If these generative artificial intelligences are surprisingly good at holding a conversation, it is not because they understand what is said to them or are able to reason. It is, in reality, down to a stage of their construction called RLHF, reinforcement learning from human feedback. "It is done in three steps. First, we show the AI numerous examples of dialogues. Second, we feed it evaluations made by humans of the quality of the responses of its earlier versions. Finally, the AI trains itself to predict this human feedback, so in a certain way it learns to self-evaluate its responses," explains Benoît Sagot.
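The third step, learning to predict human feedback, can be sketched in miniature. Everything here is a simplifying assumption: responses are reduced to a single numeric feature (say, how detailed they are), and the "reward model" is one trainable weight fitted to pairwise human preferences; real systems train large neural networks on millions of such comparisons.

```python
import math

# Hypothetical human feedback: pairs of (preferred, rejected) response features.
# In this toy data, annotators consistently preferred more detailed responses.
preferences = [(0.9, 0.2), (0.8, 0.3), (0.7, 0.1), (0.6, 0.4)]

w = 0.0   # one-parameter reward model: reward(x) = w * x
lr = 1.0  # learning rate

for _ in range(200):
    for good, bad in preferences:
        # Bradley-Terry model: P(good is preferred) = sigmoid(reward difference).
        p = 1 / (1 + math.exp(-(w * good - w * bad)))
        # Gradient ascent on the log-likelihood of the human preference.
        w += lr * (1 - p) * (good - bad)

def reward(x):
    """Predicted human rating of a response with feature value x."""
    return w * x

# The model now assigns higher reward to what humans preferred.
print(reward(0.9) > reward(0.2))  # → True
```

Once such a reward model exists, the AI can score its own candidate responses, which is what Benoît Sagot means by learning to self-evaluate.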

The RAG method helps AI avoid mistakes

But once training is complete, generative AIs are no longer connected to the immense corpora of text they have ingested. "They are a bit like students who revised in the library and, on the day of the exam, no longer have access to the books and must answer from memory," the researcher illustrates. In daily life, humans use every means at their disposal (the Internet, books, reports, etc.) to work more efficiently. So why not let AIs do the same? This is where the RAG method comes in. It proposes, specifically, to connect generative artificial intelligence to document databases. When we put a question to the AI, it can then look for information related to it. Behind the scenes, it enriches our question with elements of context unearthed from this database, increasing the probability that the generated answer is accurate.
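The retrieve-then-enrich mechanism can be sketched as follows. The document base and the scoring method are stand-ins: production RAG systems rank passages with vector embeddings rather than the simple word-overlap score used here to keep the example self-contained, and the final prompt would be sent to an LLM.

```python
# A hypothetical internal document base.
documents = {
    "doc1": "The RAG method connects a language model to a document base.",
    "doc2": "Retrieval happens before generation: relevant passages are found first.",
    "doc3": "Bread is baked at around 220 degrees Celsius.",
}

def retrieve(question, docs, k=2):
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, docs):
    """Enrich the user's question with retrieved context before calling the LLM."""
    context = "\n".join(text for _, text in retrieve(question, docs))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer using the context."

prompt = build_prompt("How does the RAG method work?", documents)
print(prompt)
```

The user only ever sees their own question; the enriched prompt, with the retrieved passages prepended, is what actually reaches the language model.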


This is not the RAG method's only virtue. It also anchors AIs in current events, an area where they currently flounder. The computing power needed to train an AI is very expensive, and the process takes time. Once this phase is complete, it cannot be repeated constantly to add recent data to the training base. This is why the version of ChatGPT backed by GPT-3.5 is unreliable for events after January 2022, and the GPT-4 version for news more recent than April 2023.

Combining the power of LLMs with document bases

The RAG method also makes it possible to build a bridge between AI and often confidential company documents. With the AI start-up Dust, the Malt group, which connects businesses with freelancers, has created conversational bots capable of searching its internal documentation. "However powerful it may be, GPT-4 alone would not be able to properly inform our employees about how Malt works or about our pricing system, because the OpenAI LLM has not been trained on this information. Our chatbots, however, can identify internal documents related to their question and provide them with an informed response," explains Claire Lebarz, vice-president of Data and AI at Malt. All while offering answers as concise or as detailed as desired, because these chatbots are also backed by the latest LLMs from companies such as OpenAI, Google, or Mistral.


Currently being tested, they will each serve a different department (customer relations, HR, sales, support, etc.) at the French company, which has 600 employees. "This can save our employees a lot of time, especially since we are present in nine countries and the teams have to juggle a multitude of markets and national regulations," confides Claire Lebarz. For new recruits, such chatbots are also valuable learning tools. "Creating chatbots with Dust is quick," she explains. "The key is to properly build the document bases they will be linked to, and to find the people who will be able to update those bases when necessary."

Does the RAG method guarantee the reliability of generative AI responses? "Not 100 percent," warns Benoît Sagot, "but it improves it significantly." Above all, users can more easily assess the accuracy of what is presented, by asking the AI which sources it took into account in formulating its answer.

A method designed by Meta AI researchers

Retrieval augmented generation techniques were designed in 2020 by Meta researchers. But at the time, generative AIs were neither as capable nor as widely used. Since then, things have changed: ChatGPT went from 1 million to nearly 180 million monthly active users between the end of 2022 and last August. Competing tools have flourished. Filling in their gaps has become a priority for the tech giants. The latest, more powerful AIs are, moreover, capable of taking more contextual elements into account when formulating their responses. "And the way RAG systems retrieve documents from the bases has improved a lot," notes Benoît Sagot. A winning cocktail.

Still, education in generative artificial intelligence will also have a key role to play. "Prompt engineering", the art of properly formulating requests to an AI, maximizes the chances of getting relevant responses. The more familiar the public becomes with the workings of these mysterious systems, the better it will know what can be asked of them with eyes closed, and what is better to double-check.