Researchers at the Tow Center for Digital Journalism at Columbia University tested the search feature of ChatGPT, the popular AI chatbot from OpenAI. OpenAI opened access to ChatGPT's search function in October this year, saying it would offer users “fast and up-to-date answers with links to relevant web resources.”
But testing of the tool revealed that it struggled to correctly attribute quotes from articles, even those published by outlets that had allowed OpenAI to use their content to train its large language models (LLMs).
The study's authors asked ChatGPT to identify the sources of two hundred quotes drawn from twenty publications. Forty of those quotes came from publishers that had blocked OpenAI's search crawler from accessing their sites. Yet even in these cases, the chatbot responded confidently, providing false information and only in some cases admitting that it was unsure of the accuracy of what it supplied.
“Overall, ChatGPT gave answers that were partially or completely wrong in 153 cases, but it admitted to being unable to give a correct answer on only 7 occasions. Only in those 7 results did the chatbot use qualifying words and phrases such as ‘like’, ‘possible’, ‘maybe’, or ‘couldn’t find the original article’,” the researchers said in a statement.
In another set of tests, ChatGPT’s search engine incorrectly attributed quotes from a letter to the editor in the Orlando Sentinel to stories in Time magazine. In yet another example, when the chatbot was asked for the source of a quote from a New York Times article about endangered whales, it provided a link to a website that had copied and republished the original article.