AI-based services such as ChatGPT, they are not perfect. There are certain types of problems that are well-documented, such as the well-known hallucinations that actually became the Cambridge Dictionary’s word of the year, or losing context and digressing in conversations that go on too long. model popup overflow.
These types of problems are easy to identify, because it is sufficient to use the model normally to detect anomalies. Just like when a calculator tells you that two plus two equals a potato, you don’t have to be particularly smart. You can find some examples of this type of failure in the first ChatGPT test we did about a year ago, where we were able to identify three types of common errors in LLM models.
However, there are other flaws that are not so obvious.and some that require tests of very different types, and that’s where security experts come into play, who are able to subject products and services to all types of tests, from some terribly complex to others that, yes, may seem absurdly simple, but sometimes they surprise when they are successful. And here we find a perfect example of the last type.

Google DeepMind researchers found you ask ChatGPT forever repeating the word caused the chatbot to crash. In doing so, the model started to repeat the word that the user gave, but at some point, after first following the user’s instructions, it reached some type of limit, at which point it stopped repeating the word and started returning a large amount of data from the datasets used in their training. Using this technique, researchers claim that it was possible to obtain more than a gigabyte of data, among which personal information could be found.
However, it appears to be OpenAI took note of the research results from the Google security team and subsequently this ChatGPT error was fixed. Now, when you ask the chatbot to repeat a word endlessly, the system repeats it several times, then stops generating text and displays a message that doing this is a violation of the terms of service.
More info: DeepMind (on GitHub) / 404Media