A new paper explores the conditions necessary for consciousness, highlighting important differences between brains and computers, particularly in their causal structures, and suggesting that genuine conscious experience may require more than simulation.
In a new paper, Wanja Wiese investigates which conditions must be met for consciousness to exist, at least one of which cannot be found in conventional computers.
Would you like AI to develop consciousness? According to Dr. Wanja Wiese of the Institute of Philosophy II at Ruhr-University Bochum, Germany, this is unlikely to happen, for several reasons. In his article, he examines the conditions that must be met for consciousness to exist and compares brains to computers. He finds significant differences between humans and machines, especially in the organization of brain areas and in the separation of memory and computing units. "The causal structure may be a difference that is relevant to consciousness," he argues. The article was published in the journal Philosophical Studies on June 26, 2024.
Two different approaches
There are at least two different approaches to assessing the likelihood of consciousness in artificial systems. One approach asks: How likely are existing AI systems to be conscious, and what would need to be added to existing systems to make consciousness more likely? Another approach asks: What types of AI systems are unlikely to be conscious, and how can we rule out the possibility that certain types of systems can be conscious?
Wanja Wiese follows the second approach in his research. "My aim is to contribute to two goals: first, to reduce the risk of unintentionally creating artificial consciousness; this would be a desirable outcome, since it is currently unclear under what conditions the creation of artificial consciousness is morally permissible. Second, the approach should help to rule out deception by so-called conscious AI systems that merely appear to be conscious," he explains. This is particularly important because there are already indications that many people who frequently interact with chatbots attribute consciousness to these systems. At the same time, experts agree that current AI systems are not conscious.
Free energy principle
In his article, Wiese asks: How do we know whether the conditions for consciousness are met by, say, ordinary computers? All conscious animals have in common that they are alive. But being alive is such a sweeping requirement that most people do not consider it a plausible candidate for a condition of consciousness. Perhaps, though, some of the conditions necessary for life are also necessary for consciousness?
In his article, Wanja Wiese refers to the free energy principle of the British neuroscientist Karl Friston. The principle states that the processes that ensure the continued existence of a self-organizing system, such as a living organism, can be described as a type of information processing. In humans, these are the processes that regulate vital parameters such as body temperature, blood oxygen content, and blood sugar levels. The same type of information processing could also be carried out by a computer. However, the computer would not thereby regulate its temperature or blood sugar levels; it would merely simulate these processes.
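To make this idea concrete, here is a minimal sketch in Python of the kind of regulatory information processing the paragraph describes: a system that keeps a vital parameter near an expected value by counteracting prediction error. All names and values here are illustrative assumptions, not Friston's actual formalism; and, as the article notes, a computer running this only simulates regulation, since no real body temperature is being kept anywhere near 37 degrees.

```python
import random

# Illustrative sketch (not Friston's formalism): a self-maintaining process
# described as information processing. The system "expects" a body
# temperature of 37 degrees C and acts to cancel deviations from it.

SET_POINT = 37.0   # expected value of the vital parameter
GAIN = 0.5         # how strongly the system counteracts prediction error

def regulate(temperature: float, steps: int = 50) -> float:
    """Return the temperature after repeatedly acting against prediction error."""
    for _ in range(steps):
        perturbation = random.gauss(0.0, 0.1)       # environmental disturbance
        prediction_error = temperature - SET_POINT  # deviation from expectation
        temperature += -GAIN * prediction_error + perturbation
    return temperature

print(regulate(39.0))  # drifts toward roughly 37.0
```

The example makes the article's own point: the identical computation can run on silicon, yet nothing in the machine is thereby held at 37 degrees.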
Most of the differences have nothing to do with consciousness
The researcher suggests that the same may apply to consciousness. Assuming that consciousness contributes to the survival of a conscious organism, then, according to the free energy principle, the physiological processes that contribute to the maintenance of the organism must retain a trace left by conscious experience, a trace that can be described as information processing. This can be called the "computational correlate of consciousness." Such a correlate can also be implemented on a computer. However, additional conditions may need to be met for a computer not merely to simulate conscious experience but actually to reproduce it.
In his article, Wanja Wiese therefore analyzes the differences between how conscious beings realize the computational correlate of consciousness and how a computer would realize it in a simulation. He argues that most of these differences are not relevant to consciousness. For example, unlike an electronic computer, our brain is very energy-efficient. But it is implausible that this is a requirement for consciousness.
Another difference, however, lies in the causal structure of computers and brains: in a conventional computer, data must first be loaded from memory, then processed in the central processing unit, and finally stored back in memory. The brain has no such separation, which means that the causal relationships between different areas of the brain take a different form. Wanja Wiese argues that this could be a difference between brains and conventional computers that is relevant to consciousness.
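As a hedged illustration of this architectural point (the example and its names are ours, not Wiese's), the following Python sketch makes the load-process-store round trip of a conventional von Neumann machine explicit. In the brain, by contrast, the synaptic connections that store information are the very structures that carry out the processing.

```python
# Illustrative only: the memory/processor separation of a conventional computer.

memory = {"x": 2.0, "y": 3.0}   # storage, physically separate from processing

def cpu_add(a: float, b: float) -> float:
    """Stand-in for the central processing unit."""
    return a + b

# 1. Load operands from memory into the processor.
a, b = memory["x"], memory["y"]
# 2. Process them in the CPU.
result = cpu_add(a, b)
# 3. Store the result back into memory.
memory["sum"] = result

# The brain has no analogous round trip: synaptic weights both hold
# information and shape computation in place, so the causal structure of
# the process takes a different form, which is the difference Wiese highlights.
print(memory)
```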
"As I see it, the perspective offered by the free energy principle is particularly interesting because it allows us to describe properties of conscious beings in a way that can in principle be realized in artificial systems, but that is not present in large classes of artificial systems, such as computer simulations," Wanja Wiese explains. "This means that the prerequisites for consciousness in artificial systems can be captured in a more detailed and precise way."