Be very careful with OpenAI GPT
- November 29, 2023
- 0
It seems like a lie but it’s not even a month since OpenAI introduced GPT . And yes, I’m aware that as a general rule, when I use
It seems like a lie but it’s not even a month since OpenAI introduced GPT . And yes, I’m aware that as a general rule, when I use
It seems like a lie but it’s not even a month since OpenAI introduced GPT . And yes, I’m aware that as a general rule, when I use “seems untrue” in relation to the passage of time, I usually do so by pointing out that a lot more has passed than it seems, but this time the opposite is happening in as a result of the stormy adventures of the company’s management. And as you may recall, its CEO, Sam Altman, was suddenly and surprisingly fired, later hired by Microsoft and, after a spectacular revolt by many of the tech company’s employees, rehired by the board that fired him. days before. If you haven’t experienced the express telenovela live, here we will tell you everything.
Be that as it may, the waters have returned to their course and even if some blood has entered the river, normalcy seems to reign again (at least from the outside), although in this case and unfortunately for them normalcy is not synonymous with calmbecause its products and services continue to be subject to intense scrutiny by both authorities and regulators, as well as independent researchers to find and detect potential problems.
Well, as we can read in Wired, newly released GPT tags can be much more indiscreet than they should besomething that substantially violates the privacy of their creators, or more precisely, the privacy of the data that was used for a specific training process to personalize these specialized ChatGPT replicas.
As we already told you when they were announced, GPT are chatbots based on ChatGPT, but with the particularity that they can be adapted by responsible personswhich can both add datasets and parameterize the response to prompts. And the problem, of course, is that the users of these personalized chatbots can resort to techniques that will result in the response revealing the specific data used to personalize them.
As we can read in the mentioned article, «Privacy concerns arising from file leaks should be taken seriouslysays Jiahao Yu, a computer science researcher at Northwestern University. «Although they do not contain confidential information, they may contain some knowledge that the designer does not want to share with others, and [que sirve] as a core part of GPT’s own brand«.
According to research conducted by a team of researchers from the said university, more than 200 GPTs were evaluated and what was discovered could not be more disturbing because They describe it as “surprisingly easy” to access said data, even for users without advanced knowledge of rapid creation. OpenAI will no doubt need to implement measures, and quickly, to address this issue, and in the meantime, anyone who has created their own GPT tokens should review the data provided to them and, if necessary, disable them until they have proof that these issues have been fixed.
Source: Muy Computer
Donald Salinas is an experienced automobile journalist and writer for Div Bracket. He brings his readers the latest news and developments from the world of automobiles, offering a unique and knowledgeable perspective on the latest trends and innovations in the automotive industry.