Do you use ChatGPT? Be careful what you tell it, because some of the information you entrust to it can be turned against you. Here is everything you should never reveal to the artificial intelligence.
ChatGPT has gradually worked its way into our daily lives, to the point that we use it without even realizing it. Whether it's to get help with homework, look up information, ask for medical advice, plan an evening, or even talk about our feelings or learn how to flirt, the chatbot has become an essential tool. But all this help comes at the cost of a great deal of data that we hand over to OpenAI without the slightest resistance.
Because yes, AI and privacy have a more than complicated relationship. And however much we think we know, we are rarely aware of just how much these tools are data vacuum cleaners. And while ChatGPT is far from the worst chatbot in this regard – that honor goes to Meta AI – there is information it is better not to entrust to it, because it can be stored, analyzed, exposed, and even hacked – especially with the “advanced memory” feature, which lets it draw on old conversations to remember a user's preferences.
It may seem obvious to some, but it is important never to give ChatGPT your usernames and passwords, because it is not designed to handle this type of information securely. The same goes for banking information, such as credit card numbers, IBANs, and other security codes, which can easily appear in a bank email you ask the chatbot to reread. Personal data entered by one user has already accidentally resurfaced in a response given to another; it would be unfortunate if that happened with this kind of information.
Many may be tempted to ask ChatGPT to play doctor or to interpret test results, but think twice before entrusting it with your medical data! The chatbot is not bound by the same professional confidentiality obligations as doctors or healthcare institutions. Such information could be used for advertising targeting or, in the worst case, discrimination.
It is also better to avoid disclosing too much personal information, such as your name, address, and phone number. These details may seem harmless but, taken together, they can formally identify a user by creating an exploitable “digital footprint”. This obviously also applies to your social security number and other information found on an identity card or passport.
While some do not hesitate to treat the chatbot as a confidant, it is better not to reveal too many intimate and personal details, given the risk of accidental disclosure. Indeed, political or religious views, relationship problems, or emotional distress can be exposed if conversation logs are hacked or poorly managed.
You should also be careful with confidential material related to your job or business. Using ChatGPT for professional tasks is tempting, but it is better to refrain from entering strategic data, trade secrets, or confidential internal information, which could be stored and potentially used to train future models. Samsung unfortunately experienced an incident of this kind, when parts of its source code leaked after a simple interaction with the AI.
Obviously, these precautions apply not only to ChatGPT, but also to all the other popular artificial intelligences, such as Google Gemini, Microsoft Copilot, Claude, Meta AI, Perplexity, and DeepSeek. Be careful never to forget their true nature!