
Study shows AI poses no threat to human existence

August 21, 2024

Science fiction is full of examples of AI running amok and turning against its human creators: HAL 9000, The Matrix, Skynet, GLaDOS, the Cylons. Humanity seems to have a deep-seated fear of machine rebellion. With the advent of increasingly capable large language models (LLMs) like ChatGPT, the question of what dangers AI might pose has become even more urgent.
And now we have some good news. According to new research led by computer scientists Iryna Gurevych of the Technical University of Darmstadt in Germany and Harish Tayyar Madabushi of the University of Bath in England, these models are not capable of spinning out of human control.

In fact, the models are constrained by their training: they cannot acquire genuinely new skills without explicit instruction, and so they remain under human control. This means that while people can still use the models for malicious purposes, the LLMs themselves are inherently safe and not a cause for concern on that front.

“There was a fear that as models got bigger, they would be able to solve new problems that we couldn’t predict, and that these larger models could gain dangerous capabilities, including reasoning and planning,” said Tayyar Madabushi.

“Our research shows that fears that the model will do something completely unexpected, innovative and potentially dangerous are not well founded.”

The sophistication of LLMs has increased to impressive levels over the last few years. They can now hold a relatively coherent conversation in text in a way that seems natural and human. They are not perfect, though: they are not actually a form of intelligence, and in many cases they lack the critical skills needed to separate good information from bad. Yet they can still convey bad information convincingly.

Recently, some researchers have examined the possibility that so-called emergent abilities develop on their own in LLMs rather than being deliberately coded into the programming. One specific example is an LLM that can answer questions about social situations without ever being explicitly trained on those situations.
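To make the distinction concrete, here is a minimal, hypothetical sketch of what is often called in-context learning: the model is shown a few worked examples in its prompt at inference time rather than being fine-tuned on the task. The model name and prompt below are illustrative only, and this is not the code used in the study.

```python
# A minimal sketch of in-context (few-shot) learning with an off-the-shelf
# causal language model. The model and prompt are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM works

# Few-shot prompt: two worked examples, then a new question the model has
# never been explicitly trained to answer. The examples live in the prompt,
# not in the model's weights.
prompt = (
    "Q: Anna waves at Ben, but he looks away. How does Anna likely feel?\n"
    "A: Hurt or ignored.\n"
    "Q: Tom shares his lunch with a hungry classmate. Is this kind?\n"
    "A: Yes, it is a kind gesture.\n"
    "Q: Mia forgets her friend's birthday. How might her friend feel?\n"
    "A:"
)

result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```

The point of the sketch is that the apparent "new skill" comes from following the pattern laid out in the prompt, not from the model developing an ability on its own.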

Observations have shown that LLMs become more capable and can perform more tasks as they scale up. It wasn’t clear whether this scaling also brought the risk of behavior we might not be prepared for. So the researchers conducted a study to find out whether such abilities genuinely emerge on their own, or whether the models are simply acting in complex ways within the bounds of their programming.

They tested four different LLMs, assigning them tasks that had previously been identified as emergent. They found no evidence of the development of differentiated thinking, or that any of the models could act outside of their programming.

In all four models, the ability to follow instructions, memorization, and linguistic proficiency accounted for all of the skills the LLMs demonstrated. There were no deviations. We have nothing to fear from LLMs acting on their own.

Humans, on the other hand, are less reliable. Our ever-expanding use of AI, which demands more and more energy and raises questions about everything from copyright to trust to how we avoid our own digital pollution, is becoming a real problem.

“Our results do not mean that AI poses no threat at all,” Gurevych says.

“Rather, we show that the predicted emergence of complex thinking skills associated with specific threats is not supported by the evidence, and that we can in fact control the learning process of LLMs very well. Future research should therefore focus on other risks associated with the models, such as their potential to be used to create fake news.”

The study was published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.

Source: Port Altele
