
American University Study Suggests ChatGPT Is Getting Dumber

July 21, 2023


A study conducted by researchers at Stanford and Berkeley revealed noticeable changes in the behavior of OpenAI's GPT-3.5 and GPT-4 language models over the span of just a few months. The study points to a decrease in the accuracy of the AI's responses, confirming user reports of perceived performance degradation in newer versions of the software.




According to the researchers, whose findings are still awaiting peer review, the GPT-4 model performed impressively in March 2023, reaching a remarkable 97.6% accuracy on a prime-number identification task. By June of the same year, however, accuracy on the same questions had plunged to an alarming 2.4%.
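An evaluation like the one described above can be sketched in a few lines. The following is a minimal illustration, not the study's actual harness: the `accuracy` scorer, the yes/no answer format, and the sample answers are all hypothetical.

```python
# Hedged sketch: scoring yes/no answers to "Is N prime?" questions.
# Ground truth comes from trial division; the model answers below are
# made-up examples, not output from the study.

def is_prime(n: int) -> bool:
    """Trial-division primality check, sufficient for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def accuracy(answers: dict[int, str]) -> float:
    """Fraction of yes/no answers that match the true primality of each number."""
    correct = sum(
        (ans.strip().lower() == "yes") == is_prime(n)
        for n, ans in answers.items()
    )
    return correct / len(answers)

# Hypothetical model answers: the answer for 9 is wrong (9 = 3 * 3).
sample = {7: "yes", 8: "no", 9: "yes"}
print(f"{accuracy(sample):.1%}")  # → 66.7%
```

A real benchmark would draw the numbers from a fixed question set and parse the model's free-form replies, but the scoring step reduces to the same comparison against ground truth.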

In addition to the prime-number questions, the researchers noticed that both GPT-4 and GPT-3.5 made more formatting errors when generating code in June than in March. This finding suggests that the drop in response quality is not limited to a single task but extends across multiple domains of the language models.
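A simple way to detect formatting errors of this kind is to check whether a model's code answer even parses. The sketch below assumes the generated snippets are Python; it only tests syntactic validity, which is a weaker check than the "directly executable" criterion the researchers reportedly used, but it already catches common formatting mistakes such as stray Markdown fences left around the code.

```python
# Hedged sketch: flagging model code answers that are not valid Python.
import ast

def parses_as_python(snippet: str) -> bool:
    """Return True if the snippet is syntactically valid Python."""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

clean = "def add(a, b):\n    return a + b\n"
# A common formatting error: Markdown code fences left in the answer.
fenced = "`" * 3 + "python\ndef add(a, b):\n    return a + b\n" + "`" * 3

print(parses_as_python(clean))   # → True
print(parses_as_python(fenced))  # → False (backtick fences are a syntax error)
```

Counting the fraction of answers that fail this check across a fixed set of prompts gives a crude but repeatable measure of the formatting regression described above.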

OpenAI's GPT-3.5 and GPT-4 models are widely used in applications ranging from virtual assistants to language translation and content creation. User reports indicating a possible decline in the quality of the systems have been circulating for more than a month.



Despite the noted changes in the models' behavior, the study does not yet offer a convincing explanation for the observed decline. This makes it essential to investigate the causes of the phenomenon in depth, as the reliability and accuracy of AI models are critical to their practical usefulness.

OpenAI denies

The perceived decrease in response accuracy has become a major concern, prompting Peter Welinder, VP of Products at OpenAI, to try to dispel rumors that the changes were intentional.

“No, we haven’t made GPT-4 any less intelligent,” Welinder said in a recent tweet. “On the contrary, we improve each new version compared to the previous one.”



He suggested that changes in user experience may be due to more frequent use of ChatGPT, noting that “when you use [ChatGPT] with greater intensity, you begin to notice problems that you did not notice before.”

Despite OpenAI’s explanations, the study conducted at Stanford and Berkeley provides significant evidence against this hypothesis.

The researchers, while offering no specific explanation for the decline in accuracy and performance, noted that the deterioration over time is undeniable and calls into question OpenAI’s insistence that its models keep improving.

“During the study, we noticed that both the performance and behavior of GPT-3.5 and GPT-4 differed significantly between the two releases, and some tasks showed a significant drop in efficiency,” the paper highlights.



The researchers question whether GPT-4 has actually evolved and stress the importance of asking whether updates designed to improve some aspects of a model could compromise its capabilities in other dimensions. This raises the possibility that OpenAI’s rapid updates could do more harm than good to ChatGPT, which is already notorious for its inaccuracies.


Source: Insider


Source: Mundo Conectado
