Point of view: ChatGPT can’t think

The world has been shocked by the rapid development of ChatGPT and other artificial intelligence systems built on so-called large language models (LLMs). These systems can produce text that appears to reflect thought, understanding, and even creativity.

But can these systems really think and understand? This is not a question that can be settled by technological progress alone; careful philosophical analysis and argument tell us the answer is no. And without working through these philosophical issues, we will never fully grasp the dangers and benefits of the AI revolution.

In 1950, the father of modern computing, Alan Turing, published a paper describing a way to determine whether a computer thinks, now known as the “Turing test”. Turing imagined a person conversing, hidden from view, with two interlocutors: one a human, the other a computer. The game is to work out which is which.

If a computer can fool at least 30% of judges into believing it is human after a five-minute conversation, it has passed the test. Would passing the Turing test, which now seems inevitable, show that an AI has achieved thought and understanding?

Chess challenge

Turing dismissed this question as hopelessly ambiguous and replaced it with a pragmatic definition of “thought”, whereby to think is simply to pass the test. Turing was wrong, however, to claim that the only clear notion of “understanding” is the purely behavioural one of passing his test. While this way of thinking now dominates cognitive science, there is also a clear everyday notion of “understanding” that is tied to consciousness. To understand in this sense is to consciously grasp some truth about reality.

In 1997, IBM’s Deep Blue defeated chess grandmaster Garry Kasparov. On a purely behavioural conception of understanding, Deep Blue had knowledge of chess strategy that surpassed any human. But it was not conscious: it had no feelings or experiences.

Humans consciously understand the rules of chess and the rationale behind a strategy. Deep Blue, by contrast, was an unfeeling mechanism trained to perform well at the game. Likewise, ChatGPT is an emotionless engine trained on huge amounts of human-generated data to produce content that looks as though it was written by a person. It does not consciously understand the meaning of the words it is spitting out. If “thought” means the act of conscious reflection, then ChatGPT has no thoughts about anything.

Time to pay up

How can I be so sure that ChatGPT is not conscious? In the 1990s, neuroscientist Christof Koch bet philosopher David Chalmers a case of fine wine that scientists would have entirely pinned down the “neural correlates of consciousness” within 25 years. By this he meant they would have identified the forms of brain activity that are necessary and sufficient for conscious experience. Since there is no consensus that this has happened, it is time for Koch to pay up.

That is because consciousness cannot be observed by looking inside someone’s head. In their attempts to link brain activity and experience, neuroscientists must rely on their subjects’ testimony, or on external markers of consciousness. But the data can be interpreted in multiple ways.

Some scientists believe there is a close connection between consciousness and reflexive cognition, the brain’s ability to access information and use it to make decisions. This leads them to think that the brain’s prefrontal cortex, where the high-level processes of acquiring knowledge take place, is essentially involved in all conscious experience. Others deny this, arguing instead that conscious experience arises in whichever local brain region does the relevant sensory processing.

Scientists have a good understanding of the brain’s basic chemistry. We have also made progress in understanding the high-level functions of various parts of the brain. But we know almost nothing about the level in between: how the brain’s high-level functioning is realised at the cellular level.

People get excited about the potential of brain scans to reveal how the brain works. But fMRI (functional magnetic resonance imaging) has very low resolution: each pixel of a brain scan corresponds to roughly 5.5 million neurons, so there is a limit to the detail these scans can show. I believe progress on consciousness will come when we understand better how the brain works.
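To put that resolution in perspective, a rough calculation helps (assuming the commonly cited estimate of about 86 billion neurons in a human brain, a figure not given in this article):

86,000,000,000 neurons ÷ 5,500,000 neurons per pixel ≈ 16,000 pixels

In other words, at fMRI resolution the entire brain collapses into only a few tens of thousands of measurement points.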

Pause in development

As I discuss in my forthcoming book Why? The Purpose of the Universe, consciousness must have evolved because it made a difference to behaviour. Conscious systems must behave differently from unconscious systems, and hence survive better. If all behaviour were determined by underlying chemistry and physics, natural selection would have no motivation for making organisms conscious; we would have evolved as unfeeling survival mechanisms.

So I am confident that, as we learn more about how the brain works, we will be able to pinpoint exactly which regions of the brain embody consciousness. That is because those regions will exhibit behaviour that cannot be explained by currently known chemistry and physics. Indeed, some neuroscientists are already seeking potential new explanations of consciousness to supplement the basic equations of physics.

While the processing inside LLMs is too complex for us to fully understand at present, we know that it can in principle be predicted from known physics. On this basis, we can confidently assert that ChatGPT is not conscious. Artificial intelligence does pose many dangers, and I fully support the recent call by tens of thousands of people, including tech leaders Steve Wozniak and Elon Musk, to pause development in order to address safety concerns; the potential for fraud, for example, is enormous. But it is premature to argue that the descendants of today’s artificial intelligence systems will be super-intelligent and therefore a serious threat to humanity.

This does not mean that existing AI systems are not dangerous. But we cannot properly assess a threat unless we classify it correctly. LLMs are not intelligent. They are systems trained to give the outward appearance of human intelligence. Scary, but not that scary.
