How can the human brain compete with artificial intelligence?
January 16, 2024
While the brain works efficiently despite its relatively shallow structure with limited layers, modern artificial intelligence systems are characterized by a deep architecture consisting of many layers. This raises the question: Can brain-inspired shallow architectures compete with the performance of deep architectures, and if so, what are the underlying mechanisms that allow this?
Neural network learning techniques are inspired by how the brain works, but there are fundamental differences between the way the brain learns and how deep learning works. The main difference is the number of layers each uses.
Deep learning systems often have a large number of layers, sometimes reaching hundreds of layers, allowing them to learn complex classification tasks efficiently. In contrast, the human brain has a much simpler structure consisting of fewer layers. Despite its relatively shallow architecture and the slower and noisier nature of its processes, the brain is quite efficient at performing complex classification tasks.
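For readers who think in code, here is a minimal sketch of the two regimes the article contrasts: a deep classifier with many stacked, relatively narrow layers versus a shallow one with a single very wide hidden layer. This uses PyTorch and arbitrary illustrative sizes; neither the framework nor the layer widths come from the study.

```python
import torch.nn as nn

IN_FEATURES, NUM_CLASSES = 784, 10  # e.g. flattened 28x28 images, 10 labels

# Deep regime: many stacked, relatively narrow layers.
deep_net = nn.Sequential(
    nn.Linear(IN_FEATURES, 64), nn.ReLU(),
    *[layer for _ in range(10) for layer in (nn.Linear(64, 64), nn.ReLU())],
    nn.Linear(64, NUM_CLASSES),
)

# Shallow regime: one very wide hidden layer ("broad building, few floors").
shallow_net = nn.Sequential(
    nn.Linear(IN_FEATURES, 4096), nn.ReLU(),
    nn.Linear(4096, NUM_CLASSES),
)

for name, net in [("deep", deep_net), ("shallow", shallow_net)]:
    n_params = sum(p.numel() for p in net.parameters())
    n_layers = sum(isinstance(m, nn.Linear) for m in net.modules())
    print(f"{name}: {n_layers} linear layers, {n_params:,} parameters")
```

Both networks map the same inputs to the same set of class labels; the difference is whether capacity is gained by stacking more layers (depth) or by broadening each layer (width).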
Examining shallow learning mechanisms in the brain
The central question driving the new research is the possible mechanism behind efficient shallow learning that allows the brain to perform classification tasks with accuracy comparable to deep learning. In an article published in Physica A, researchers at Bar-Ilan University in Israel show how such shallow learning mechanisms can compete with deep learning.
“Rather than a deep architecture, like a skyscraper, the brain consists of a broad, shallow architecture, more like a very wide building with only a few floors,” said Prof. Ido Kanter of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the study.
“As the architecture gets deeper with more layers, the ability to accurately classify objects increases. In contrast, the brain’s shallow mechanism suggests that a broader network is better at classifying objects,” said Ronit Gross, an undergraduate student and one of the key contributors to the study.
“Wider and deeper architectures represent two complementary mechanisms,” she added. However, implementing very wide shallow architectures that imitate brain dynamics requires a shift in the properties of advanced GPU technology, which can accelerate deep architectures but fails to realize very wide shallow ones.
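As a rough back-of-the-envelope illustration (my own arithmetic, not a figure from the study), the weight count of a single fully connected layer grows quadratically with its width, which is one reason hardware and software tuned for stacking modest-sized layers struggle with extremely wide shallow ones:

```python
# Back-of-the-envelope: weights in one dense layer scale as width**2,
# so doubling the width of a shallow layer quadruples its memory footprint.
BYTES_PER_WEIGHT = 4  # 32-bit floats (illustrative assumption)

for width in (1_000, 10_000, 100_000):
    n_weights = width * width  # fully connected: width inputs -> width outputs
    gib = n_weights * BYTES_PER_WEIGHT / 2**30
    print(f"width {width:>7,}: {n_weights:>15,} weights = {gib:8.2f} GiB")
```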