Should we be afraid of this new super-powerful AI?

The crisis OpenAI has just gone through with Sam Altman revealed the existence of the secret Q* project: a new-generation, super-powerful AI capable of reasoning like a human brain and, ultimately, of “threatening humanity”.

The past few days have been particularly eventful at OpenAI: the company’s CEO, Sam Altman, was fired by the board of directors on Friday, November 17. It was a shock no one expected, and it plunged the firm into turmoil. Microsoft wasted no time in offering to hire him and his team to work on artificial intelligence in a new division within the software giant, with Sam Altman as its CEO. However, the majority of OpenAI’s 700 employees threatened to resign if he was not reinstated. After tense negotiations, OpenAI announced on November 22 that the company’s co-founder was returning to his position as president and CEO. A real soap opera!

But one question is on everyone’s lips: why? Why part ways with the brains of the company (along with Greg Brockman), the man behind the success and revolution of ChatGPT? Theories quickly flourished on X (formerly Twitter), some accusing Ilya Sutskever, a board member and co-founder of OpenAI, of being behind the revolt. But according to sources cited by the news agency Reuters, the bone of contention is said to be a mysterious project called Q* (pronounced “Q star”).

OpenAI Q*: general artificial intelligence in development

According to Reuters, researchers sent a letter to the board of directors a few days before the decision to fire their CEO, warning them of a powerful artificial intelligence discovery that could “threaten humanity” due to its extreme power: Q*. According to the agency’s sources, this missive and the mysterious AI are among the causes of Sam Altman’s dismissal. The board was reportedly concerned that such a super-powerful AI would be commercialized too hastily, before its consequences were really understood. Reuters was able to interview two corroborating sources on the subject, but could neither read the letter in question nor obtain an official comment from OpenAI.

From what we know, Q* would be close to an AGI (Artificial General Intelligence): an AI that functions similarly to a human brain, which would allow it, in theory at least, to perform the same tasks. So far, Q* is reportedly able to solve some elementary-school-level math problems, which is frankly impressive. We are talking about an AI that would be capable of learning and understanding more and more things on its own, rather than simply “drawing” on its training data, as ChatGPT does. If we are already worried about the consequences of the rise of artificial intelligence for the world of work, AGI would completely disrupt our economic model, since nothing would prevent companies from using it instead of their human employees. A few months ago, Ilya Sutskever published a blog post on the notion of “superintelligence”, which could emerge “this decade”. He described it as “the most impactful technology humanity has ever invented”, but one that could also “be very dangerous and lead to the marginalization of humanity or even human extinction”.

Still, this letter, added to the public teasing by Sam Altman the day before his dismissal, reportedly led the board of directors to get rid of him. This account is all the more plausible given that several board members belong to the effective altruism movement, which campaigns for limiting the capabilities of artificial intelligence. There is said to be an ethical divide within OpenAI, crystallized by the opposition between Sam Altman and Ilya Sutskever: where Sutskever wants to move cautiously on artificial intelligence in order to avoid abuses, Altman, backed by Microsoft’s support, wants on the contrary to accelerate.

As some observers point out, this whole drama represents the triumph of the entrepreneur over the scientist, while reviving the debate on the safety of artificial intelligence: are we going too fast? Can these AIs benefit all of humanity, or are they dangerous? So many questions currently divide the community. The fact remains that, today, Sam Altman is back at the helm of OpenAI with a new board of directors, one certainly more favorable to his vision of AI than the old one. Enough to push governments to urgently regulate the artificial intelligence sector?