With the introduction of the artificial intelligence chatbot ChatGPT, the ability to produce human-like text and conversations has become a hot topic worldwide. Some people even have ChatGPT write their homework or texts intended for academic use. How to distinguish text written by ChatGPT from text written by humans has been debated at length.
A TOOL DEVELOPED TO DETECT WITH OVER 99% ACCURACY
According to a study published June 7 in the journal Cell Reports Physical Science, several telltale signs can help distinguish AI chatbots from human writers. Based on these signs, the researchers developed a tool that detects academic science writing produced by artificial intelligence with over 99% accuracy.
“YOU DO NOT NEED A COMPUTER SCIENCE DEGREE TO CONTRIBUTE”
University of Kansas professor Heather Desaire says the team worked hard to create an accessible method.
“There are some pretty glaring problems with AI writing right now,” Desaire said. “One of the biggest problems is that it combines text from many sources and there’s no fact-checking.”
There are many AI text detectors available online, and they perform quite well. However, they are not specifically designed for academic writing.
To fill this gap, the team set out to develop a better-performing tool for exactly this purpose. They focused on a type of paper called a “perspective,” written by scientists, which provides an overview of a specific research topic. To train the model, the team selected 64 perspective articles and had ChatGPT generate 128 articles on the same research topics. When they compared the articles, they found a telltale indicator that separates AI writing from human writing: predictability.
Unlike artificial intelligence, humans tend to write with more complex paragraph structures: the number of sentences and the total word count per paragraph vary more. Punctuation and word choice also give clues. For example, while scientists tend to use words like “however,” “but,” and “despite,” ChatGPT frequently uses the words “others” and “researchers.”
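The stylistic cues described above — sentences and words per paragraph, punctuation, and characteristic marker words — can be sketched as a simple feature extractor. This is only an illustrative toy, not the researchers’ actual feature set or model; the marker-word lists below are assumptions drawn from the examples quoted in the article.

```python
import re

# Assumed marker-word lists, based on the examples in the article —
# NOT the study's real vocabulary features.
HUMAN_MARKERS = {"however", "but", "despite"}   # favored by scientists
AI_MARKERS = {"others", "researchers"}          # favored by ChatGPT

def paragraph_features(paragraph: str) -> dict:
    """Compute simple per-paragraph stylistic features of the kind
    the article describes (sentence/word counts, punctuation, marker words)."""
    sentences = [s for s in re.split(r"[.!?]+", paragraph) if s.strip()]
    words = re.findall(r"[A-Za-z']+", paragraph.lower())
    return {
        "n_sentences": len(sentences),
        "n_words": len(words),
        "human_marker_hits": sum(w in HUMAN_MARKERS for w in words),
        "ai_marker_hits": sum(w in AI_MARKERS for w in words),
        "question_marks": paragraph.count("?"),
        "semicolons": paragraph.count(";"),
    }

example = ("However, the results varied; sentence length differed "
           "between samples. But the trend held.")
print(paragraph_features(example))
```

In a full pipeline, features like these would be computed for every paragraph of the training articles and fed to an ordinary classifier; the point of the sketch is only that the signals involved are shallow and easy to measure.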
HIGH ACCURACY RATE
When tested on full articles, the model achieved 100% accuracy. When identifying individual paragraphs within an article, the accuracy rate was 92%.
The research team’s model also far outperformed an existing AI text detector on the market in similar tests.
“THE FIRST THING PEOPLE WANT TO KNOW WHEN THEY HEAR RESEARCH…”
“The first thing people want to know when they hear about the research,” Desaire says, “is ‘Can I use this to find out if my students are actually writing their papers?’”
“NOT DESIGNED FOR THIS”
While the model is quite capable of distinguishing between AI and scientists, Desaire says it is not designed to catch AI-generated student papers for educators. However, she notes that people can easily adapt her methods to build models for their own purposes.
The team plans to test the model on larger datasets and across different types of academic science writing. As AI chatbots evolve and become more sophisticated, the researchers also want to know whether their model will keep up.