A marker to recognize texts written by the AI

ChatGPT, the gifted artificial intelligence, fascinates as much as it worries with the quality of its answers on complex subjects. So much so that its creators plan to insert a signature into the texts it generates in order to prevent abuse.

The new tools developed by OpenAI are revolutionizing the artificial intelligence sector, such as its DALL-E image generator and its Point-E 3D model generator. But the one currently in the spotlight is ChatGPT (Chat Generative Pre-trained Transformer), a conversational AI capable of producing complex texts on demand. It can write in seconds a coherent text that would take a human hours, answer complex questions, admit its mistakes, engage in debates and even refuse to respond to requests it considers inappropriate, all in a natural way, as the company explains in a press release. Launched as a test version on Wednesday, November 30, ChatGPT has since enjoyed dazzling success and already exceeds one million users. Cover letters, DIY (do-it-yourself) tutorials, press articles, scripts for TV series... The uses keep multiplying, and developers are already building new tools on top of it. And, unsurprisingly, students are among the first to take advantage of it.

© OpenAI

The cleverest among them see it as a precious time saver as well as a way to get a good grade without much effort. The AI still makes mistakes but, given the speed at which the technology is advancing, it should soon be able to fool anyone, much to the chagrin of teachers, whose anti-cheating tools cannot detect the deception: ChatGPT generates its answers on the fly and formulates the text differently each time a user asks it a question. Teachers are already organizing their response: in-class written tests, surprise oral exams, corpora of works the AI has not mastered... But the developers fear this technology will be used for more malicious purposes, such as propaganda (by spamming the Internet, for example), imitating a person's writing style to incriminate them, or even writing a work and taking all the credit for it. That is why they are looking for a way to correct this flaw.

ChatGPT: a signature to recognize texts generated by AI

Scott Aaronson is a researcher working for OpenAI on a way to tell whether a text was written by a human or generated by an artificial intelligence. During a lecture given at the University of Texas, he addressed the issue of the safety and ethics of artificial intelligence, and spoke about his objective at the Californian start-up. "Basically, whenever GPT generates long text, we want there to be an imperceptible secret signal in its word choices, which you can use to prove later that, yes, it's from GPT," he explains. The idea is to make the text generation not purely random but pseudorandom. The signal would have to be complex and robust enough to spot even short snippets of generated sentences inserted into a text; an average user would be none the wiser, but experts familiar with the workings of ChatGPT would be able to tell an AI-generated text from an authentic one.
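To make the idea concrete, here is a minimal sketch of how such a keyed, pseudorandom bias on word choices could work in principle. The hashing scheme, the scoring rule and every name below are illustrative assumptions, not OpenAI's actual method.

import hashlib
import hmac

SECRET_KEY = b"demo-key"  # held only by the model provider

def keyed_score(prev_word: str, candidate: str) -> float:
    # Deterministic pseudorandom score in [0, 1) derived from the secret key.
    digest = hmac.new(SECRET_KEY, f"{prev_word}|{candidate}".encode(),
                      hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_next_word(prev_word: str, candidates: list[str]) -> str:
    # Among equally plausible candidates, favor the one with the highest keyed score.
    return max(candidates, key=lambda c: keyed_score(prev_word, c))

def watermark_evidence(words: list[str]) -> float:
    # Average keyed score over consecutive word pairs: close to 0.5 for ordinary
    # text, noticeably higher for text whose word choices were steered above.
    scores = [keyed_score(a, b) for a, b in zip(words, words[1:])]
    return sum(scores) / len(scores) if scores else 0.5

A reader sees only ordinary text, but anyone holding the key can compute the evidence score and flag passages that were very likely generated, which matches the kind of imperceptible yet verifiable signal Aaronson describes.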

Scott Aaronson is thinking in particular of a way to give GPT a very specific writing style: "Writers like Shakespeare, Wodehouse or David Foster Wallace have such a distinctive style that even if they tried to pretend they were someone else, they probably wouldn't succeed. Everyone would recognize them. One can imagine trying to create an AI in the same way, that is, built from the start so that its texts contain indelible marks, whether cryptographic or stylistic, that betray their origin." Nevertheless, he is well aware of the difficulties of such a method. For example, it would be enough to ask a second AI to reformulate the generated response in order to remove the signature and, since the code of comparable language models is available in open source, anyone could use or modify one to produce text without this mark...

© OpenAI

OpenAI ChatGPT: amazingly accurate answers

To develop ChatGPT, the OpenAI teams used "reinforcement learning from human feedback" (RLHF). They fed it a vast corpus of texts (newspaper articles, novels, film scripts, online conversations...) to teach it to understand the context of a conversation and give relevant, coherent answers. As a result, the AI masters a wide range of topics and language styles; it can even compose poems and remember things said earlier in the discussion! In addition, it is constantly evolving: thanks to continuous learning, it should keep improving and become even more capable. Several users have been able to test it and were amazed by the results: a poker champion gave it a complex mathematical problem to solve, while a start-up manager had it spot and correct errors in code. When a user asked it whether it felt trapped as an AI, ChatGPT replied that it had no consciousness or capacity for self-determination, that the concept of freedom did not apply to it and that, as a language model, it could not feel emotions. At least it has the merit of being clear!
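As a rough illustration of the "human feedback" part of RLHF, the toy script below fits a scalar reward to pairwise human preferences (answer A preferred over answer B) using a logistic, Bradley-Terry-style loss. Real systems train a neural reward model over whole texts and then optimize the chat model against it; the data and numbers here are purely made up for clarity.

import math

# Humans compared three candidate answers and ranked answer 0 > 1 > 2.
preferences = [(0, 1), (0, 2), (1, 2)]  # (preferred, rejected) pairs
rewards = [0.0, 0.0, 0.0]               # one learnable scalar per answer
learning_rate = 0.1

for _ in range(500):
    for better, worse in preferences:
        # Probability the current rewards assign to the human's choice.
        p = 1.0 / (1.0 + math.exp(-(rewards[better] - rewards[worse])))
        # Gradient ascent on the log-likelihood of that choice.
        rewards[better] += learning_rate * (1.0 - p)
        rewards[worse] -= learning_rate * (1.0 - p)

print(rewards)  # answer 0 ends up with the highest reward, answer 2 the lowest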

We asked ChatGPT to write a cover letter for a journalist position. By indicating the field, the degrees, the experience, the skills and a few other details, we obtained a perfectly satisfactory result. Of course, the letter remains unoriginal, but it can always serve as a basis to customize. The AI also suggested, at our request, ideas for DIY Christmas decorations in the form of a bulleted list, perfect when you lack inspiration! ChatGPT can also serve many other uses, such as helping to solve a math problem (with supporting explanations) provided it is not too complex, explaining a difficult notion or even debugging code; according to many developers, its capabilities are quite impressive. To the point that one question inevitably arises: could ChatGPT improve to the point of, one day, doing the work instead of the human?
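For readers who would rather script such requests than type them into the chat window, here is a minimal sketch using the official openai Python package. The model name, the prompt details and the job description are placeholders, and an API key is assumed to be set in the OPENAI_API_KEY environment variable.

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompt = (
    "Write a cover letter for a journalist position. "
    "Field: technology news. Degree: master's in journalism. "
    "Experience: three years at a regional daily. Skills: SEO, data journalism."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)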

ChatGPT: an AI capable of creating instead of humans?

ChatGPT could be very useful in many areas, such as programming, but also for content production, whether for social networks, websites, scripts for films, series or advertising, and why not journalism, which raises the question of the usefulness of humans in the future! Some even see it as a potential competitor to Google, an idea the AI itself does not share. OpenAI CEO Sam Altman told The Guardian that the system was "a first demonstration of what is possible": "Soon you will be able to have helpful assistants that talk to you, answer questions and give advice. Later you may have something that goes and does tasks for you. Eventually you may have something that goes and discovers new knowledge for you." In the days since its release, academics have had fun generating answers to exam questions good enough to earn excellent grades, and programmers have used the tool to solve coding problems in obscure programming languages in seconds.

© CCM

Inevitably, faced with such prowess, one can only wonder whether ChatGPT could one day "take" the place of humans. Could professions that depend on content production become obsolete? We put the question to the AI itself. Rest assured: the AI explains that, "while ChatGPT is capable of generating impressive copy, it still cannot match the creativity and critical thinking ability of human journalists. Furthermore, the technology cannot yet exercise ethical and moral judgment, which is crucial in journalism." Phew! Another decisive point: its knowledge currently stops at 2021, so it is impossible, for example, to get an up-to-date article on Elon Musk's takeover of Twitter. And, of course, it can also be wrong and give incorrect answers, which ChatGPT readily admits. At a time when fake news is swarming the Internet, it is more necessary than ever to check and cross-reference your sources. This is a problem the company struggles to solve, because there is no source of truth in the data used to train the model, and the supervised training may also be miscalibrated "because the ideal answer depends on what the model knows, rather than what the human demonstrator knows". Still, within a few years, AIs like ChatGPT could allow certain professions to offload repetitive and uninteresting tasks, or steal thousands of jobs, depending on your point of view. In the meantime, it is possible to test ChatGPT by registering. But you have to be patient, because the service has been overwhelmed since its launch, which regularly causes it to crash...

© CCM


