Lawyer using ChatGPT in legal process got in trouble


A lawyer who used OpenAI's chatbot ChatGPT as part of a legal proceeding in the USA has gotten into trouble.

The generative AI chatbot ChatGPT is built on a language model called GPT, and this language model does not always produce accurate results. In fact, the chatbot can make mistakes even on very basic matters, so its output should never be trusted blindly. A lawyer in the USA has just demonstrated this to the whole world.

According to The New York Times, Steven Schwartz, a lawyer at the firm Levidow, Levidow and Oberman, turned to OpenAI's chatbot for help writing a recent legal brief. Schwartz's firm was representing Roberto Mata, who sued the Colombian airline Avianca, claiming he was injured during a flight to New York's John F. Kennedy International Airport. The airline, believing it was in the right, asked a federal judge to dismiss the case, and that is where ChatGPT got involved. Mata's lawyers prepared a 10-page brief arguing why the case should continue. The brief cited numerous court decisions, including "Varghese v. China Southern Airlines", "Martinez v. Delta Airlines" and "Miller v. United Airlines". None of them were real: ChatGPT had made them all up.

Schwartz admitted that he had used ChatGPT for the brief and, remarkably, stated that he had asked the chatbot itself to confirm the accuracy of the decisions. He said he had been unaware that ChatGPT could provide false information, that he deeply regretted relying on it, and that he would never use it again without conclusively verifying the authenticity of its output. The judge presiding over the case, citing the unprecedented situation Schwartz's actions had created, scheduled a hearing for 8 June to discuss possible sanctions.

This is not the first time ChatGPT has been at the center of legal trouble. Earlier this month, for example, Chinese authorities arrested a person living in Gansu province in northern China for allegedly using ChatGPT (which is banned in the country) to write fake news articles. This is reported to be one of the first arrests under the rules China put into effect to prevent artificial intelligence services from spreading "incorrect information". According to Chinese officials, the person used OpenAI's chatbot to produce news articles describing a fatal train accident, publishing multiple fabricated versions and misleading the public. The detained Chinese citizen, for his part, stated in his defense that he had used ChatGPT only to rewrite news articles that had gone viral and to try to monetize the page traffic. Similar cases are expected not only in China but in many other countries, because the risks posed by false content generated with ChatGPT and similar systems are increasingly on the agenda, and concerns in this area continue to grow.