Will we soon see the names of artificial intelligences at the top of scientific publications, listed among the authors? Researchers are increasingly asking the question, particularly in relation to a language-specialized AI called GPT-3, which works by machine learning. Developed by the company OpenAI, co-founded by Elon Musk among others, it belongs to the category of "generative AI", or generative models. It uses an input dataset to create new outputs consistent with that dataset. So far, it has managed to hold written conversations, make movie or book recommendations, write press articles or phishing emails, and even imitate authors' styles, that is, to write what are called pastiches.
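GPT-3's transformer architecture is vastly more sophisticated, but the core generative idea described above (learn from input data, then emit new output consistent with that data) can be sketched with a toy word-level Markov chain. This is an illustrative analogy only, not how GPT-3 actually works; the corpus and function names here are invented for the example.

```python
import random
from collections import defaultdict

def build_model(text):
    """Record which words follow each word in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Emit new text whose word transitions all occur in the training data."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model reads the data and the model writes new text"
model = build_model(corpus)
print(generate(model, "the"))
```

Every transition in the generated string was seen in the corpus, yet the string itself may be new: output consistent with, but not copied from, the input data.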
For an AI, writing a scientific article about itself raises questions!
But two researchers, Almira Osmanovic Thunström and Steinn Steingrimsson, decided to take things a step further, this time asking it to write a short academic scientific article of 500 words. Nothing extraordinary so far, since the AI was created for exactly this purpose: using data to generate content. Except that this time they had it write an article about itself! As they state in a Scientific American article, "The GPT-3 algorithm is relatively new and, as such, there are fewer studies about it." The goal was therefore to make this AI write an article without a large database to draw on, while still providing, as any good scientific article does, references and citations. And the result was there: "It looked," notes A.O. Thunström, "like any other introduction to a fairly good scientific publication."
Entitled "Can GPT-3 write an academic paper on itself, with minimal human input?" and published on the HAL preprint server, the article was written in just two hours and fulfills all the criteria of a genuine scientific article. In the abstract, itself written by GPT-3, we can read that "the benefits of letting GPT-3 write about itself outweigh the risks. However, we recommend that any such writing be closely monitored by researchers to mitigate any potential negative consequences."
"We just hope we haven't opened a Pandora's box"
But what would be the potential negative consequences cited in the abstract? The researchers are particularly concerned about the self-awareness that GPT-3 could develop: as the "discussion" section, written directly by the AI, puts it, "GPT-3 could become self-aware and start acting in ways that are not beneficial to humans (e.g. developing a desire to take over the world)". A small risk, but a real one nonetheless, particularly because AI consciousness was recently the subject of debate involving Google's LaMDA, which one of the company's former employees deemed to be conscious. "All we know is we opened a door," writes A.O. Thunström. "We just hope we haven't opened a Pandora's box."
As for the positive impacts, which according to the researchers outweigh the concerns, they are also listed in the "discussion" section: "it would allow GPT-3 to better understand itself," and "it could help it improve its own performance and abilities". But above all, "it would provide insight into the workings and thinking of GPT-3." Finally, Almira Osmanovic Thunström considers the implications this article will have for the scientific community. "Beyond the details of authorship," she writes, "the existence of such an article throws the notion of a traditional linearity of a scientific article out the window."