OpenAI, one of the most talked-about technology companies of recent times, has been sued over ChatGPT's lies.
ChatGPT, the generative AI chatbot, is built on a language model called "GPT," and that model does not always produce accurate results. In fact, the chatbot can frequently get even basic facts wrong, so its output should never be trusted blindly. The issue is back on the agenda because ChatGPT generated a string of false information about Mark Walters, a radio host living in Georgia. The chatbot accused Walters of defrauding and embezzling funds from a nonprofit organization, when no such thing had ever happened. The system reportedly fabricated this information in response to a request from a journalist named Fred Riehl. Upon learning of it, Mark Walters sued OpenAI, in what is reported to be the first libel / defamation lawsuit filed against the company. Many AI experts have voiced concerns about chatbots producing false information, and developer companies need to take concrete, careful steps on this front. A reputation that took years to build can be shattered in minutes by a chatbot's answer, since hardly anyone questions what they read on today's internet. OpenAI's own warning to ChatGPT users reads: "The system may generate incorrect information from time to time…"
Before this, a lawyer in the USA demonstrated to the whole world the misplaced trust we mentioned in the introduction. According to The New York Times, Steven Schwartz, a lawyer at the firm Levidow, Levidow and Oberman, turned to OpenAI's chatbot for help writing a legal brief. ChatGPT assisted him, but in doing so produced information that was simply not true, which naturally created a huge problem once it came to light. If you are curious about the details of that case, you can find them in our earlier story.