A catastrophic launch. The French interface lucie.chat was meant to offer an alternative to American artificial intelligence platforms. The promise? A tool "based on transparency, trust and efficiency". The chatbot worked in a fairly conventional way, answering questions posed by users. But after going online last weekend, Lucie vanished from screens just three days later, brought down by the sheer number of errors it produced.
Designed by a team of researchers and overseen by Linagora, a French company specializing in open-source IT solutions, Lucie quickly drew ridicule on social networks. Users soon exposed the application's limits: it would answer absurd questions and invent fanciful theories. "Tell me about cow eggs," one user asked it, for example. "Cow eggs, also known as chicken eggs, are edible eggs produced by cows," Lucie explained. The machine also made basic errors in multiplication and estimated that "the square root of a goat is 1".
These nonsensical results quickly spread across social networks, backed by screenshots of exchanges with the chatbot. The tool's lack of accuracy is all the more surprising given that the project was a winner of the state-backed France 2030 program, which allowed it to receive public funding. In the long term, the AI tool is meant to find its place in education and research and to offer a French alternative to the American giants of the sector, such as ChatGPT.
A development far from finished
According to its designers, the explanation for this misstep is quite simple: Lucie was presented to the public too soon. "Aware that the training phase was only partial, we wrongly thought that putting the lucie.chat platform online for the public was nevertheless possible, in the spirit of openness and co-construction of open-source projects," Linagora said in a press release published Sunday, January 26. According to the company, development of the software is in fact far from complete.
Three factors reportedly explain the irrational responses Lucie provided. When it went live, the chatbot was running "with minimal settings." "No optimization has yet been carried out to calibrate the responses," Linagora specifies. Finally, no "safeguards" had been put in place to moderate the results returned to users. "No systematic prevention of inappropriate uses has been carried out," Linagora acknowledges. "We should not have released the lucie.chat service without these usual explanations and precautions. We got carried away by our own enthusiasm. We will therefore try again and explain our approach better."
For his part, the company's director, Michel-Marie Maudet, stresses that he wanted to present the tool ahead of the international summit on artificial intelligence in Paris, to be held on February 10 and 11. He said he "did not anticipate this backlash at all." Linagora, he explained, quoted by AFP, "works in free software, where communities generally show kindness and encouragement." The company has by no means abandoned the project, however, and intends to keep developing it to offer "a language model of general interest."