
OpenAI sued for the invention of ChatGPT

  • June 9, 2023


Ever since generative AI models like ChatGPT, DALL-E, and Bing started gaining popularity, we have been able to see just how problematic hallucinations in these systems can be. The issue is well documented, there is a collection of best practices that substantially reduces the risk of them occurring, and, given the impact they have on the reliability of these AIs, extensive research into the problem continues to this day.

If you are not clear on the concept of hallucinations in generative AI models, in addition to the tutorial I linked earlier, you can look at two specific cases. On the one hand, in the first ChatGPT test we conducted, the fictional Miguel Hernández poem is a great example of this. And, much more recently, there is the case of a lawyer who got into serious trouble for trusting the text generated by this chatbot.

This explains why OpenAI has configured the service so that, when we access ChatGPT, it shows us (in English) the following text:

«Although we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.»

In general, users of this type of service are aware of the problem, which is even more pronounced in services that do not cite the sources the model used to produce its answer. This is what sets Bing apart from many of its competitors: when you type in a query, the answer isn't always correct, but because it lists the sources it used, you can (and should) check them and confirm the information.

This can be a goldmine for some in a country so prone to lawsuits over virtually any cause. So, as we can read in The Verge, a radio host has decided to sue OpenAI after ChatGPT generated fake text about him in which he was accused of fraud and embezzlement of funds. The plaintiff, Mark Walters, was named in AI-generated text produced in response to a third party's query, which claimed he had embezzled up to $5 million from a nonprofit.


A journalist named Fred Riehl got that answer when he asked ChatGPT to inspect a PDF (something the chatbot cannot actually do) and generate a summary of its contents. ChatGPT responded by creating a fake case summary that was detailed and convincing but wrong on several counts, including the allegations of fraud. However, Riehl never published that answer, so it is not known how it reached Walters.

This is undoubtedly a somewhat complicated situation. On the one hand, we have a service that expressly states it may generate incorrect or misleading information, which some will interpret as a disclaimer. On the other side of the scale, we have a person harmed by a chatbot making up misleading information about him, false information that could be extremely damaging to his image.

Legal experts consulted by the newspaper believe the case is unlikely to last long and that OpenAI will probably be cleared of Walters' allegations. Even so, it makes sense to put yourself in the shoes of this radio host and imagine how we would feel upon finding out that an AI like ChatGPT is generating false, and moreover negative, information about us. It certainly cannot be a pleasant situation, and perhaps it makes a lot of sense to push OpenAI to redouble its efforts to prevent this type of defamation of individuals and entities.

Source: Muy Computer

Leave a Reply

Your email address will not be published. Required fields are marked *