OpenAI invented a cure for “hallucinations” in neural networks

One persistent problem with modern neural networks is "hallucinations": output that does not correspond to reality but that the model presents alongside real facts. OpenAI has now announced a new method for training AI models that helps combat these hallucinations.

"Even the most advanced models are prone to false conclusions – they tend to invent facts in moments of uncertainty. These hallucinations are particularly critical in areas that require multi-step reasoning, because a single logical mistake is enough to derail the entire conclusion," OpenAI researchers write in the report.

The idea behind the new method is to change the model's incentive scheme: instead of rewarding only the final answer, the neural network is rewarded for each correct reasoning step. This should make the model more accurate and encourage it to verify facts along the way.
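The difference between the two reward schemes can be illustrated with a minimal sketch. This is not OpenAI's actual training code; the function names and the simple averaging rule are illustrative assumptions, meant only to show why per-step rewards give a richer signal than a single end-of-chain reward.

```python
# Illustrative sketch only (not OpenAI's implementation): contrast an
# outcome-based reward, which scores just the final answer, with a
# process-based reward, which scores every reasoning step.

def outcome_reward(steps_correct: list[bool], final_correct: bool) -> float:
    """Traditional scheme: a single reward for the final answer only."""
    return 1.0 if final_correct else 0.0

def process_reward(steps_correct: list[bool]) -> float:
    """Per-step scheme: each verified reasoning step earns credit,
    so one late mistake no longer erases all feedback."""
    if not steps_correct:
        return 0.0
    return sum(steps_correct) / len(steps_correct)

# A reasoning chain where the third step goes wrong:
steps = [True, True, False, True]
print(outcome_reward(steps, final_correct=False))  # 0.0 – no hint about which step failed
print(process_reward(steps))                       # 0.75 – credit for the three good steps
```

Under the outcome scheme the model learns nothing about *where* the chain broke; under the process scheme the faulty step is the one that loses reward, which is the behaviour the researchers are trying to encourage.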

According to the researchers, identifying and eliminating a model's logical errors, or hallucinations, is an important step towards building aligned AGI [artificial general intelligence], said Karl Cobbe, a researcher at OpenAI. He also announced that the company has released the dataset of 800,000 labels used to train the neural network in the study.

At the same time, the problem is still far from solved, and experts note that the company has not yet disclosed all the details of the study. For now, the problem of artificial intelligence "hallucinations" remains open.

Source: Port Altele


