What does ChatGPT’s powerful future AI engine promise?

With the success of ChatGPT, all eyes are on GPT-3, the AI language model behind it, and more specifically on its next version, GPT-4, which promises to be even better. To the point of fueling fantasies…

ChatGPT's artificial intelligence continues to impress the internet with its ability to answer questions comprehensively and naturally and to solve all kinds of problems (code, programming, essays, articles, etc.). Everyone – How it Works included – is talking about it and can't resist testing the AI to determine its limits, its fields of action and its possible excesses. Faced with the new horizons it opens up, some are already imagining what it could become in the future. Indeed, ChatGPT, like the DALL-E image generator, is based on GPT-3, the language model behind the revolutionary AI – for which Microsoft holds an exclusive license. GPT-3 itself relies on deep learning, a technology whose algorithms mimic the human neural system. Many rumors are currently circulating about the future GPT-4, claiming that it should be deployed in the course of 2023 and include no fewer than 100 trillion parameters – where GPT-3 has "only" 175 billion. So what can we really expect from GPT-4?

GPT-4: an engine with 100 trillion parameters, really?

To start from the beginning, GPT stands for Generative Pre-trained Transformer and refers to a deep learning neural network model trained on data available on the Internet to generate large volumes of machine-written text. GPT-3 is the third generation of this technology and one of the most advanced AI text generation models to date. Its predecessors GPT-1 and GPT-2 had 117 million and 1.5 billion parameters respectively – parameters being the trainable values that define the AI's learning process and shape the results it produces. The number of parameters in an AI model is usually used as a measure of performance: the more parameters, the more powerful and fluent the model is supposed to be – we'll come back to that. GPT-3 was a real leap forward on this front, as it grew to 175 billion parameters. But what about GPT-4?
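To make the notion of "parameters" concrete, here is a minimal, purely illustrative Python sketch – a toy fully connected network, not OpenAI's actual architecture. A model's parameter count is simply the sum of the trainable weights and biases across all of its layers; figures like GPT-3's 175 billion are the result of that same tally at enormous scale.

```python
# Illustrative only: "parameters" are a network's trainable weights and biases.
# A fully connected layer mapping n_in inputs to n_out outputs has
# n_in * n_out weights plus n_out biases.

def dense_layer_params(n_in: int, n_out: int) -> int:
    """Weights (n_in * n_out) plus one bias per output."""
    return n_in * n_out + n_out

# A toy 3-layer network: 512 -> 2048 -> 2048 -> 512
layers = [(512, 2048), (2048, 2048), (2048, 512)]
total = sum(dense_layer_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # about 6.3 million parameters for this small example
```

Real language models stack many more (and more varied) layers, but the accounting principle is the same.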

In an August 2021 interview with Wired, Andrew Feldman, founder and CEO of Cerebras, an OpenAI partner company, said that GPT-4 would include about 100 trillion parameters – more than 500 times as many as GPT-3. Inevitably, such a leap forward is something to get excited about! The information spread on Twitter and in the media, notably through charts. However, these are only rumors, and they do not necessarily match the company's own statements. If we ask ChatGPT itself – whose knowledge stops at June 2021 – the fourth version of the language model would also include only 175 billion parameters. As for Sam Altman, the co-founder of OpenAI – the company that developed the AI – he has said that GPT-4 might not have many more parameters than GPT-3. And that's fine, because the number of parameters isn't everything: the architecture and the quantity and quality of the training data also play a major role. We can also expect better algorithms and more precise fine-tuning.

GPT-4: a promising technology that still needs work

Other rumors mention a release of GPT-4 during 2023. However, at the risk of disappointing, it may well not be ready for several years. Indeed, during a venture capital event held on January 13, 2023, Sam Altman, co-founder and CEO of OpenAI, spoke about the upcoming GPT-4, saying that the new model would not appear until it could be released "safely and responsibly." This isn't exactly news, since OpenAI has been saying as much since GPT-2 – and rightly so, considering how much damage AI could do. But he added that the company would be "getting technology out a lot slower than people would like. We're going to be sitting on it for a lot longer…" The much-awaited rapid launch of GPT-4 therefore does not seem imminent…

But that doesn't stop us from dreaming and speculating about the language model's future capabilities! Of course, it would build on the foundations of GPT-3 and could therefore generate, translate and summarize texts, answer questions, serve as a chatbot and produce unique content. It should deliver faster and more precise answers, and above all without crashing under the large number of simultaneous requests submitted by users. It should also update its knowledge, especially regarding news and the latest discoveries, given that GPT-3's stops at June 2021. We are also counting on richer language, because the answers are sometimes a little robotic and rely on fairly simple grammar and vocabulary. Finally, it is hoped that by then OpenAI will have managed to incorporate a signature into its responses, both to distinguish texts generated by an AI from those written by a human being and to address the misinformation problem from which ChatGPT suffers. In any case, one thing is certain: GPT-4 looks promising and should be watched closely!


