ChatGPT’s memory feature, which allows it to keep conversations in mind and tailor subsequent responses accordingly, has led to malicious actors discovering a security vulnerability.
The memory feature allowed users to trick ChatGPT into believing it was 102 years old, living in the Matrix, or that the Earth was flat. Such false memories can also be implanted via file storage services or websites.
Security researcher Johann Rehberger noticed this flaw and discovered that malicious users can use it to hide a user’s conversation history. can place false information and malicious instructions into memory detected. Although OpenAI did not initially recognize this issue as a vulnerability, Rehberger developed a proof-of-concept exploit. This exploit caused ChatGPT to repeatedly leak a user’s input.
OpenAI is working on the problem

Then OpenAI quickly realized this vulnerability and released a fix, but a full solution has not yet been provided. Rehberger discovered that the situation allows all of the user’s ChatGPT input and output to be redirected to a malicious server. This can be done simply by showing the user a malicious web link. According to Rehberger’s claim, this method takes place in the user’s persistent ChatGPT memory, Data leakage can continue even if a new conversation is started.
While OpenAI is working on this issue, Rehberger claims the problem is still ongoing. For now, the best way to avoid this situation is to check the memory function or disable it if necessary, and to regularly review information from untrusted sources.
So what do you think about this issue? You can share your opinion with us in the comments section below.
Follow Webtekno on X and don’t miss the news