
Microsoft believes it has a solution for AI hallucinations

September 26, 2024

Microsoft is launching a “correction tool” to fix hallucinations produced by AI systems. Experts are not convinced of its effectiveness.

In a blog post, Microsoft introduces Correction, a tool for addressing AI hallucinations. No LLM, no matter how many parameters it has, is immune to hallucinations, the term used for confidently stated AI errors. Microsoft believes it has found a solution with Correction.

The tool scans the output of AI systems for hallucinations: statements that are not supported by the connected data sources. When an unsupported sentence is identified, a new correction request is sent to the generative AI model, and the LLM checks the unfounded claim against the source document, Microsoft explains.
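
As a rough illustration of that detection step, consider the sketch below. It reflects our reading of Microsoft’s description; the function name and the matching logic are hypothetical and do not represent Microsoft’s actual Correction API.

    # Hypothetical sketch of the detection step: flag sentences in a
    # model's output that cannot be matched to the connected sources.
    # Nothing here reflects Microsoft's actual Correction API.
    def find_unsupported_sentences(output: str, sources: list[str]) -> list[str]:
        """Return the output sentences with no support in the sources."""
        unsupported = []
        for sentence in output.split(". "):
            # Naive grounding check, for illustration only; a real system
            # would use a trained groundedness classifier rather than
            # substring matching.
            if not any(sentence.strip().lower() in doc.lower() for doc in sources):
                unsupported.append(sentence)
        return unsupported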

Improve

If the model admits its error, two options are possible. If the erroneous sentence contains no content related to the base document at all, it is removed entirely. Otherwise, the model rewrites the sentence so that the source supports it. Optionally, the model can also be asked to explain why the original output was incorrect.
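
Expressed as code, the remove-or-rewrite branch Microsoft describes might look like the sketch below. The prompts and the ask_model callable are our own invention, used only to make the flow concrete; they are not Microsoft’s interface.

    # Hypothetical sketch of the correction step: drop a flagged sentence
    # that has no relation to the source document, otherwise rewrite it
    # so the source supports it. The prompts and ask_model are invented.
    def correct_sentence(sentence: str, source: str, ask_model, explain=False):
        verdict = ask_model(
            "Does this sentence contain any content related to the source? "
            f"Answer yes or no.\nSentence: {sentence}\nSource: {source}"
        )
        if verdict.strip().lower().startswith("no"):
            return None  # unrelated content is removed entirely
        rewritten = ask_model(
            "Rewrite the sentence so that the source supports it.\n"
            f"Sentence: {sentence}\nSource: {source}"
        )
        if explain:
            reason = ask_model(f"Explain briefly why this was wrong: {sentence}")
            return rewritten, reason
        return rewritten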

Microsoft believes that Correction is a significant breakthrough. Fear of hallucinations is one of the main obstacles to wider adoption of AI, and merely filtering false content out of the output does not deliver the best user experience, Microsoft claims. Correcting errors, the company argues, is a far more effective answer to hallucinations.

One percent

Experts quoted by TechCrunch doubt Microsoft’s claims, however. One criticism is that the fix doesn’t get to the root of how hallucinations occur. AI systems hallucinate because they have no actual “knowledge” of anything: models are trained solely to recognize statistical patterns. When you ask ChatGPT or Copilot a question, the model predicts which answer is most likely to follow that question. The model itself doesn’t know whether the answer is right or wrong.
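
That criticism can be made concrete with a toy next-token distribution (our own example, not from the article): the model simply ranks continuations by probability, and nothing in that process checks whether the top-ranked answer is true.

    # Toy illustration of next-token prediction: the model ranks
    # continuations by probability; truth never enters the computation.
    # The logit values below are invented for the example.
    import math

    def softmax(logits):
        z = sum(math.exp(v) for v in logits.values())
        return {tok: math.exp(v) / z for tok, v in logits.items()}

    # Invented scores for "The capital of Australia is ..."
    probs = softmax({"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 0.4})
    print(max(probs, key=probs.get))  # prints "Sydney": plausible, but wrong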

In addition, experts wonder how Microsoft can guarantee that Correction itself makes no mistakes: the tool is trained on data sets that may themselves contain biases. It may raise the accuracy of AI systems from 90 to 99 percent, but the remaining one percent error rate will not be easily eliminated, and it can lull users into a false sense of security that their model is 100 percent accurate.

Source: IT Daily
