
OpenAI doesn’t hit the pause button, on the contrary

  • April 6, 2023

By showing its cards, OpenAI hopes to silence the criticism. According to the company, it remains crucial to test systems in practice.

Today, many voices are calling for a temporary pause in the development of new AI systems. More than a thousand experts from academia have signed an open letter to that effect, and well-known figures from the technology world such as Elon Musk and Steve Wozniak support the call as well. A response from OpenAI was, of course, inevitable.

In a blog post, OpenAI tries to quell the panic about the safety of AI. The company is sometimes accused of bringing experimental technology to market prematurely. Unjustified criticism, according to the AI pioneer, because it tested the new GPT-4 system internally for six months before rolling it out to the public in phases. Thanks to this extensive testing process, ChatGPT is now up to 40% less likely to spread false information and up to 80% less likely to produce “malicious content”.

OpenAI already announced these statistics when GPT-4 first launched, and an extensive risk analysis is also publicly available. That paper made us, and many experts, frown: it shows that GPT-4 could in theory be used to design racist marketing campaigns or even help create chemical weapons. ChatGPT is therefore a filtered version of the underlying language model, but OpenAI is not always transparent about how it reins in the AI model, and excesses can never be entirely ruled out.

Protection of people

An interesting paragraph in the OpenAI blog describes how the company claims to handle the protection of people and personal data. Because “think of the children” is a popular argument whenever new technology launches, the company says it wants to verify users’ ages more strictly in the future to prevent minors from using the platform. Insufficient age verification is one of the reasons why Italy and other European countries want to ban ChatGPT.

Those European countries also accuse OpenAI of disregarding GDPR rules. The company counters that it only uses publicly available data to train its models, not private data about individuals. We have some reservations about that claim.

We tested it ourselves and asked ChatGPT what it knows about our editors. The tool knew more or less what we do in daily life, and the knowledge gaps were filled with completely made-up information. As a user, there is no way to see where ChatGPT gets this information from, nor any way to have erroneous information removed.

ChatGPT does not shy away from making up information about individuals.

No pause

In any case, you can read between the lines of the blog post that OpenAI has no intention of hitting the pause button. No matter how extensively you test a technology, the reasoning goes, you have to let ordinary people use it to know how it behaves in the real world. The big players in the tech industry don’t want to slow down either.

Google keeps pushing its ChatGPT alternative Bard forward, and Microsoft and Meta also continue to make AI-related announcements at a steady pace. Amazon wants even more generative AI and is opening its doors wide. The AI train has left the station anyway, and regulation, as always, is miles behind.

Source: IT Daily
