
OpenAI admits that AI detectors are not reliable

September 11, 2023


A week after OpenAI published tips for teachers on working with ChatGPT, the company has acknowledged that tools designed to detect AI-generated text are… unreliable. This conclusion is not really surprising.

Just before the start of the school year, OpenAI published a blog post with tips for teachers. CEO Sam Altman’s company gave teachers examples of how to use ChatGPT in the classroom. Among other things, it stressed the importance of students’ critical thinking skills. “The goal is to help them understand the importance of their own critical thinking skills,” OpenAI said.

No reliability

In the FAQ section, OpenAI admits that AI detectors can hardly distinguish original texts from AI-generated ones. Ars Technica examined so-called AI detectors such as GPTZero a few months ago and found that they often produce false positives because their detection methods are unsound. It also turns out to be easy to get around the tools by rewriting parts of the text.

At the end of July, OpenAI took its own “classifier” offline. The tool was supposed to distinguish original texts from AI-written ones, but with an accuracy of only 26 percent it fell well short of the high expectations raised at its launch in February. According to Altman, the classifier is not dead and buried yet, but its accuracy needs to improve significantly before it gets a second life.

ChatGPT doesn’t know either

In the FAQ, OpenAI clears up another misunderstanding: ChatGPT itself cannot tell the difference between original and AI-generated texts. “ChatGPT has no ‘knowledge’ of what content could be AI-generated,” OpenAI said. When someone asks ChatGPT whether a text is original or not, its answer has no factual basis. In other words: the chatbot is simply making things up.
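To see what this looks like in practice, here is a minimal sketch of such a prompt sent through the OpenAI API. The model name, prompt wording, and sample text are illustrative assumptions, not taken from the article; the point is that whatever verdict comes back is exactly the kind of unfounded answer OpenAI warns about.

```python
# Minimal sketch: asking ChatGPT to judge whether a text is AI-generated.
# Assumptions (not from the article): the model name, prompt wording, and
# sample text are all illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sample_text = "The industrial revolution fundamentally reshaped urban life."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": (
                "Was the following text written by an AI? "
                f"Answer yes or no and explain.\n\n{sample_text}"
            ),
        }
    ],
)

# The model will answer confidently either way, but per OpenAI's FAQ it has
# no 'knowledge' of what content is AI-generated, so the verdict is noise.
print(response.choices[0].message.content)
```

Whatever the model prints here, confident or not, is not grounded in any detection capability, which is precisely the misunderstanding OpenAI is trying to clear up.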

With this answer, OpenAI also points out that AI models can provide incorrect information. “Sometimes ChatGPT can sound convincing, but it may contain false or misleading information. These ‘hallucinations’ can, for example, lead to incorrect quotations.” OpenAI therefore advises against using output from ChatGPT and other language models as the sole source for research.

Source: IT Daily
