New features that strengthen data privacy have been introduced for ChatGPT. The move comes after a regulatory ban in Europe.
For those who missed it, Italy’s data protection authority temporarily blocked ChatGPT, stating that OpenAI’s data processing practices violate the European Union’s General Data Protection Regulation (GDPR). The regulator says OpenAI has no legal basis for the mass collection of user data it uses to train the GPT language model that underlies ChatGPT. The authority also says OpenAI does not do enough to protect children: although OpenAI states that ChatGPT is intended for users over 13, officials point out that there is no age verification mechanism to prevent children from seeing inappropriate answers.

After all these accusations, OpenAI has taken an important and necessary step: it now allows people to disable their chat history in ChatGPT. When chat history is turned off, that is, when conversations are no longer recorded, the information from your exchanges with the chatbot will not be used to train the GPT language model. It is also reported that a ChatGPT Business subscription is in the works for professionals who need more control over their data. This enterprise package, expected to launch in the coming months, will not use customer data to train GPT by default.
Before this, OpenAI made headlines with the launch of its “Bug Bounty Program”. Under the program, the company is taking an important step toward making its systems more secure: security researchers who find vulnerabilities in OpenAI’s systems will be able to earn rewards ranging from $200 to $20,000. In its statement on the program, the company said: “OpenAI’s mission is to create AI systems that benefit everyone. To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we recognize that vulnerabilities and flaws can arise.
We believe transparency and collaboration are crucial in addressing this. That is why we are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and fix vulnerabilities in our systems.” The company is taking a sensible path here: technological systems, and artificial intelligence in particular, have become so complex that even the people who build them can unknowingly introduce security vulnerabilities that only others are able to uncover.