
New research points to fundamental flaws in artificial intelligence

January 25, 2024

ChatGPT and similar machine-learning-based technologies are gaining ground. But even the most advanced algorithms face limitations. Researchers from the University of Copenhagen have made a groundbreaking discovery: they have proven mathematically that, beyond the simplest problems, it is impossible to develop AI algorithms that are always stable. The research could pave the way for improved protocols for testing algorithms, and it highlights inherent differences between machine processing and human intelligence.

A scientific paper describing the result has been accepted for publication at one of the leading international conferences on theoretical computer science.

Machines interpret medical scans more accurately than doctors, translate foreign languages, and may soon be able to drive more safely than humans. But even the best algorithms have flaws. A research team at the Department of Computer Science, University of Copenhagen, is trying to uncover them.

Take, for example, an automated vehicle that reads road signs. If someone puts a sticker on a sign, it will not distract a human driver. But a machine can easily be thrown off, because the sign is now different from the ones it was trained on.

“We want the algorithms to be stable in the sense that if the input data is slightly changed, the output data remains almost the same. While real life contains all kinds of noise that humans are used to ignoring, machines can get confused,” says team leader Professor Amir Yehudayoff.
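
To make the idea concrete, here is a minimal sketch in Python (the model and numbers are entirely hypothetical, not the researchers' setup): a toy linear classifier whose output flips when its input is nudged slightly, the way a sticker nudges a road sign away from the data the system was trained on.

```python
import numpy as np

# A toy linear classifier (hypothetical weights, purely illustrative):
# it labels a 2-feature input as "sign" (1) or "no sign" (0).
weights = np.array([1.0, -1.0])

def classify(x: np.ndarray) -> int:
    # Decision rule: which side of the hyperplane the input falls on.
    return int(weights @ x > 0)

x = np.array([0.51, 0.50])        # an input just on one side of the boundary
sticker = np.array([-0.02, 0.0])  # a tiny perturbation, like a sticker

print(classify(x))            # 1: the sign is recognized
print(classify(x + sticker))  # 0: a small change flips the output
```

An algorithm that is stable in the researchers' sense would have to give nearly the same answer for both inputs; the toy model above clearly does not.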

A language to discuss flaws

In a world first, the group, together with researchers from other countries, has proven mathematically that, for all but the simplest tasks, it is impossible to create machine-learning algorithms that will always be stable. A scientific paper describing the result has been accepted for publication at Foundations of Computer Science (FOCS), one of the leading international conferences in theoretical computer science.

“I would like to point out that we are not working directly on applications for autonomous vehicles. But this looks like a problem that is too complex for algorithms to always be stable,” says Amir Yehudayoff, adding that this does not necessarily imply serious consequences for the development of automated cars:

“If the algorithm is wrong in only a few rare cases, that’s acceptable. But if it happens under a wide range of circumstances, that’s bad news.”

The scientific paper cannot be used by industry to identify errors in its algorithms. Nor was that the intention, the professor explains:

“We are developing a language to discuss weaknesses in machine learning algorithms. This could lead to the development of guidelines explaining how algorithms should be tested. In the long run, this could in turn lead to the development of better, more stable algorithms.”

From intuition to mathematics

One possible application would be to test algorithms to protect digital privacy.

“Some companies may claim to have developed a completely secure solution for privacy protection. First, our methodology can help establish whether the solution can be completely secure. Second, it can pinpoint its weaknesses,” says Professor Yehudayoff.

Above all, though, the scientific paper is a contribution to theory, and its mathematical content is particularly innovative, he adds: “We understand intuitively that a stable algorithm should perform almost as well as before when exposed to a small amount of input noise, like a road sign with stickers on it. But as theoretical computer scientists, we need a firm definition. We must be able to describe the problem in the language of mathematics. Exactly how much noise must the algorithm withstand, and how close to the original should the output be, if we are to call the algorithm stable? This is what we have provided an answer to.”
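
One generic way to phrase such a definition is a Lipschitz-style condition: an algorithm A is stable if any two inputs within distance δ of each other yield outputs within distance ε of each other. This is an illustrative formalization only; the paper's exact definition may differ.

```latex
% An illustrative stability condition (a generic formalization;
% the paper's precise definition may differ):
% inputs within distance \delta of each other must yield
% outputs within distance \varepsilon of each other.
\[
  \|x - x'\| \le \delta
  \;\Longrightarrow\;
  \|A(x) - A(x')\| \le \varepsilon
\]
```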

It is important to remember the limitations

The scientific paper has generated great interest among colleagues in the theoretical computer science community, but little from the technology industry. At least not yet.

“You should always expect some lag between a new theoretical development and the interest of people working in applications,” says Amir Yehudayoff, adding with a smile: “And some theoretical developments will remain unnoticed forever.”

But he doesn’t think that will happen in this case: “Machine learning continues to advance rapidly, and it’s important to remember that even solutions that are very successful in the real world still have limitations. Sometimes machines may seem like they can think, but ultimately they do not have human intelligence. It’s important to keep that in mind.”


Source: Port Altele
