Faced with the threat posed by ChatGPT, Google is preparing its response with an artificial intelligence of its own. Developed by DeepMind and called Sparrow, it is meant to provide reliable, sourced answers while staying within certain limits.
Many people see ChatGPT, the conversational artificial intelligence developed by OpenAI, as a potential competitor to Google, thanks to its ability to provide exhaustive, complex and above all unique answers. Even though the tool is far from perfect – it often makes mistakes and spreads false information, and some have already started using it for malicious purposes – Microsoft already plans to integrate it into its Bing search engine in order to attack Google on its own ground. No wonder the Mountain View firm, itself a pioneer in the field of AI, is genuinely worried – to the point of declaring a “code red” and reorganizing various departments to push its AI projects forward. One of them, called Sparrow, could well compete with ChatGPT, since it takes the form of a fairly similar chatbot. Demis Hassabis, CEO of DeepMind – the subsidiary of Alphabet, Google’s parent company, specializing in artificial intelligence – revealed in an interview with Time that the firm plans to launch Sparrow in private beta this year.
Google Sparrow: cautious development
DeepMind is taking a much more cautious approach than OpenAI so as not to tarnish its reputation – Google has no room for error. In a post introducing Sparrow in September 2022, the Alphabet subsidiary described its AI as “a dialogue agent that is helpful and reduces the risk of dangerous and inappropriate responses.” It is based on DeepMind’s Chinchilla language model, which admittedly has fewer parameters than OpenAI’s GPT-3.5 but was trained on a very large amount of data. In addition, it has Internet access, which allows it to incorporate up-to-date information into its responses.
Sparrow’s slightly later launch compared to ChatGPT is intentional and, according to the company, necessary: it is working on important features that OpenAI’s AI lacks, such as citing the sources used to produce an answer. For Demis Hassabis, “it is right to be careful in this area”. DeepMind also wants to establish the limits that its artificial intelligence must not cross. “Our agent is designed to speak with a user, answer questions, and search the internet using Google when it’s useful to find evidence to inform its answers,” DeepMind explained in September. The firm has therefore drawn up a set of rules to ensure that its “model behavior is safe”, including bans on making threatening statements, making hateful comments or “pretending” to have a human identity.
Sparrow: Google’s response to ChatGPT?
But will Sparrow keep all of its promises? In September, testing by the Alphabet subsidiary indicated that the artificial intelligence provided plausible, evidence-backed answers to factual questions 78% of the time. On the other hand, DeepMind admitted that the AI still had progress to make on following its rules, since testers were able to trick it into breaking them 8% of the time. For the firm, crafting better rules for Sparrow “will require both the input of experts on many topics (including policy makers, social scientists and ethicists) and the participation of a wide range of users and stakeholder groups”. We will now have to wait for the private beta to compare Google’s AI with ChatGPT, both in terms of response quality and respect for ethics.
But Sparrow is not the Mountain View firm’s only artificial intelligence project. It is also working on AlphaCode, which can code as well as a novice programmer, and LaMDA (Language Model for Dialogue Applications), a conversational AI that drew particular attention in June 2022, when one of its developers built a case to try to prove that the artificial intelligence was conscious. Google is taking many precautions with its development, so we are unlikely to get our hands on it anytime soon.