What does ChatGPT’s new AI promise?

OpenAI has just unveiled GPT-4, the new version of the language model that powers ChatGPT, its popular chatbot. Presented as more precise and more reliable, it is even capable of interpreting images!

After months of rumors and speculation, OpenAI officially announced GPT-4 on Tuesday, March 14, 2023: the brand-new version of its language model, the “engine” behind the revolutionary AI ChatGPT, the conversational robot that everyone on the Internet has been talking about since its public release in November 2022. The company, which also runs the DALL-E image generator, is rolling out the new version as an update that improves the AI’s capabilities while introducing some promising new features, and subscribers to the paid ChatGPT Plus plan can already take advantage of it. “GPT-4 is a large multimodal model, less capable than humans in many real-life scenarios, but as good as humans in many professional and academic contexts”, OpenAI said in a statement. The start-up promises that with GPT-4, its chatbot will become “more creative and collaborative than ever”. And, surprise: Microsoft’s AI in Bing is already based on it! So, based on early previews, is the new version of the conversational AI amazing? Does it mark a dramatic improvement over its predecessor? And, above all, does it fix artificial intelligence’s shortcomings and the excesses they entail?

GPT-4: a more powerful language model

As a reminder, GPT (an acronym for Generative Pre-trained Transformer) is a generative language model based on a neural network, a system whose algorithms loosely mimic the human nervous system. This artificial intelligence is trained through deep learning, by analyzing huge volumes of data, drawn from the Internet in the case of GPT. It is this combination that allows it to generate text by “reasoning” and writing like a human being.
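
To make the “generative” part concrete, here is a minimal sketch of how a GPT-style model continues a prompt token by token. Since GPT-4’s weights are not public, the sketch uses the small, openly available GPT-2 model from Hugging Face’s transformers library as a stand-in:

```python
# Minimal illustration of autoregressive text generation with a GPT-family
# model. GPT-4's weights are not public, so this uses the openly available
# GPT-2 model (a much smaller ancestor) via Hugging Face's transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model repeatedly predicts a likely next token given everything written
# so far; chaining those predictions is what produces human-like text.
result = generator("Artificial intelligence is", max_new_tokens=30)
print(result[0]["generated_text"])
```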

GPT-3, the third generation of this technology, was one of the most advanced AI text-generation models to date. The previous versions were far smaller: GPT-2, for instance, had 1.5 billion parameters, the values that define the learning process of the AI and structure the results it produces. The number of parameters in an AI model is typically used as a rough measure of capability: the more parameters, the more powerful and fluent the model tends to be. GPT-3 was a real leap forward on this front, as it grew to 175 billion parameters. For GPT-4, on the other hand, OpenAI did not wish to reveal the exact size of its new model.

GPT-4: what are the differences from GPT-3?

GPT-4 builds on the foundations of GPT-3 and can therefore generate, translate and summarize texts, answer questions, serve as a chatbot and produce content on demand. It brings one promising new feature and many improvements, as explained by OpenAI on its site. Be careful, however: you should not expect a “wow” effect. “In casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle”, the company explains on its website. Also, the training data still does not appear to be up to date; it still seems to stop in 2021…
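
For developers who obtain access through the waiting list mentioned below, querying GPT-4 goes through the same chat API already used for GPT-3.5. A minimal sketch, as the openai Python library worked at launch (versions before 1.0), assuming a valid API key and an account that has been granted GPT-4 access:

```python
# Minimal sketch of querying GPT-4 through OpenAI's chat API as it existed
# at launch in March 2023 (openai Python package < 1.0). Assumes the
# OPENAI_API_KEY environment variable is set and the account has GPT-4
# access via the waiting list.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Les Misérables in three sentences."},
    ],
)
print(response.choices[0].message.content)
```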

GPT-4: taking images into account

One of the most interesting novelties is that the language model becomes “multimodal”. Indeed, GPT-4 can analyze and respond to requests containing both text and images, where GPT-3 was limited to writing; the start-up Be My Eyes is among the first partners testing this capability. “It can flexibly accept input that intersperses images and text arbitrarily, much like a document”, sums up OpenAI co-founder Greg Brockman to The Guardian. To put it simply, the user can submit an image along with a question to the new model. For example, if the user gives the chatbot a hand-drawn sketch detailing a website project, GPT-4 produces a detailed answer explaining the steps to build that site, although it still only generates text in response.
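
Image input is not yet publicly exposed (see below), but based on the request shape OpenAI later documented for its vision-capable models, a multimodal query could look roughly like this; the field names and model availability are assumptions here, not something offered at launch:

```python
# Hypothetical sketch of a multimodal request mixing text and an image.
# Image input was NOT publicly available at GPT-4's launch; the content-parts
# format below follows the shape OpenAI later documented for vision-capable
# models, so treat the field names and model access as assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumes a vision-enabled variant is available to you
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What would it take to build this website?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/website-sketch.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```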

The New York Times conducted several trials with GPT-4. A journalist submitted a photo of the contents of his refrigerator to the AI, asking what he could cook with the food shown. The AI offered him several recipes based on the available ingredients; only one of the answers, a wrap, required an ingredient that did not seem to be pictured. In another example, a visually impaired person submits to the artificial intelligence a photo of two shirts of the same model but in different colors, and the AI tells him which one is red. According to OpenAI, GPT-4 can “generate the same level of context and understanding as a human being”, by explaining the world surrounding the user, by summarizing Web pages drowning in information, or by answering questions about what it “sees”, for example. This option is not available at the moment and continues to be tested within Be My Eyes, which uses GPT-4 for a visual accessibility product, but it should arrive in a few weeks.

GPT-4: more creative and useful AI

According to OpenAI, GPT-4 is “more creative and collaborative” than its predecessor, but also than any other existing AI system. First, the new language model produces faster, more accurate answers, without crashing due to too many simultaneous user requests. In addition, the size of the text accepted as a query has been increased: GPT-4 can now analyze texts of up to 25,000 words, compared to around 3,000 words for GPT-3.5. Users can therefore submit larger texts to it for analysis, such as a novel, a short story or a scientific article, which allows the AI to solve more writing or synthesis problems.
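
Those limits are approximate word counts (the API actually measures input in tokens), so a practical precaution when feeding long documents to the model is to split them into chunks below the limit. A minimal sketch, taking the word thresholds from the figures above:

```python
# Split a long document into chunks that stay under the model's input limit,
# so each chunk can be summarized separately. The limits below are the rough
# word counts cited in the article; the API actually counts tokens, so a
# real implementation should use a tokenizer and leave a safety margin.
GPT4_WORD_LIMIT = 25_000   # approximate, per the article
GPT35_WORD_LIMIT = 3_000   # approximate, per the article

def split_into_chunks(text: str, word_limit: int) -> list[str]:
    """Cut `text` into consecutive chunks of at most `word_limit` words."""
    words = text.split()
    return [
        " ".join(words[i : i + word_limit])
        for i in range(0, len(words), word_limit)
    ]

# Example: a novel-length text fits in a handful of GPT-4-sized chunks,
# but would need roughly eight times as many with GPT-3.5's smaller window.
novel = open("novel.txt", encoding="utf-8").read()
print(len(split_into_chunks(novel, GPT4_WORD_LIMIT)), "chunk(s) for GPT-4")
print(len(split_into_chunks(novel, GPT35_WORD_LIMIT)), "chunk(s) for GPT-3.5")
```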

OpenAI claims that “GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.” This version of the language model should therefore be better at tasks that require creativity or advanced reasoning. Thus, during his demo, company co-founder Greg Brockman asked it to summarize a section of a blog post using only words beginning with “g”. The AI could be used for tasks like music composition, script writing (books written by ChatGPT in its GPT-3.5 version have already been pouring into the publishing market for a few weeks) and reproducing an author’s style.

GPT-4: better test results

According to the results published by OpenAI, GPT-4 has taken an important step forward in the accuracy of its answers, reducing the gross errors and illogical reasoning that can be encountered on ChatGPT with GPT-3.5. Indeed, the firm had its new language model take exams in biology, law, economics and literature. And GPT-4 significantly outperforms its predecessor, as can be seen in the graph published by OpenAI, where the results appear in blue for GPT-3.5 and in green for GPT-4.

We note, however, that even if there are clear improvements, the AI still has difficulty with exams that require creativity, such as language and English literature tests. On the other hand, it passed the bar exam in the United States with a score close to the top 10% of candidates, where GPT-3.5 scored around the bottom 10%. GPT-4 also obtains very good results in many languages (English is in a way its “mother” tongue, the one used as a base), with an accuracy level of 84.1% in Italian, 83.7% in Spanish and 83.6% in French. These results mean that users will get higher quality answers.

GPT-4: a more secure language model

OpenAI has worked for a long time to make GPT-4 “safer” and to curb its excesses as much as possible. Thus, it would be 82% less likely than GPT-3.5 to respond to requests for disallowed content, such as coding malware. Similarly, its accuracy has been revised upwards, since it is now 40% more likely than the previous version to offer a factual response to a request.

Not all problems are solved, though! Indeed, the AI still tends to “hallucinate”, inventing and confidently asserting false information. This is why OpenAI recalls that “one should be very careful when using the results of a language model, especially in high-stakes contexts”, adding that “GPT-4 poses similar risks to previous models, such as generating harmful advice, malicious code, or inaccurate information”.

OpenAI has already worked with several partners to create new services and applications integrating GPT-4. This is the case of Duolingo, Be My Eyes, Stripe, Morgan Stanley, Khan Academy and even the government of Iceland. Developers can register on a waiting list to gain access to the company’s API. As for the general public, they have already had a preview of GPT-4 with… the chatbot integrated into Bing by Microsoft! Indeed, when announcing its Prometheus AI, the firm had not specified which version of the OpenAI language model it was based on, saying only that it used “ChatGPT and GPT-3.5 key learnings and advancements”. Things are now clear with Microsoft’s latest post! For some researchers and computer scientists, it is moreover the presence of GPT-4 that would explain the AI’s early misbehavior. As a reminder, many users were able to break through the search engine’s safeguards, sometimes involuntarily, which led the chatbot to multiply errors and mood swings, and even insult Internet users in mind-blowing exchanges (see our article). In too much of a hurry to integrate AI into Bing and pull the rug out from under Google, Microsoft allegedly botched the development of its security filters, forcing it to make many adjustments afterwards, deploying daily updates and applying usage limits.

In any case, the Redmond firm intends to reveal more information about the integration of GPT-4 into its products on Thursday, March 16, via a conference dedicated to AI in the professional world. Microsoft explains that Bing will benefit from improvements as OpenAI “will bring updates to GPT-4 and beyond”, thanks to which “we will have multimodal models that will offer completely different possibilities, for example videos”; GPT-3.5, by contrast, is only able to generate content in the form of text, tables and computer code. To OpenAI’s improvements will be added Microsoft’s “own updates based on community feedback”. Hoping that their integration causes fewer clashes this time!
