
For the first time, a neural network captures a “key aspect of human intelligence”

October 27, 2023

In a new study, scientists have shown that neural networks can now “think” more like humans than ever before. The study, published Wednesday, October 25, in the journal Nature, marks a shift in a decades-long debate in cognitive science, the field that studies what kind of computer would best model the human mind. Since the 1980s, a subset of cognitive scientists has argued that neural networks, a type of artificial intelligence (AI), are not valid models of the mind because their architecture fails to capture a fundamental feature of human thought.

With the right kind of training, however, neural networks can now acquire this human ability.

“Our study shows that this important aspect of human intelligence can be achieved in practice using a model that has been rejected as lacking these capabilities,” study co-author Brenden Lake, an associate professor of psychology and data science at New York University, told Live Science.

Neural networks mimic the structure of the human brain to some extent: interconnected information-processing nodes process data in hierarchical layers. Historically, however, AI systems did not behave like human minds; they lacked the ability to combine known concepts in new ways, a skill known as “systematic compositionality.”

For example, if a standard neural network learns the words “jump,” “twice,” and “circle,” Lake explained, it needs to be shown many examples of how those words can be combined into meaningful expressions such as “jump twice” and “jump in a circle.” If the system is then given a new word such as “spin,” it must again work through a series of examples to learn how to use it in the same way.
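To see what that distinction means in practice, here is a minimal sketch, in Python, of an interpreter where compositionality is built in by hand. The word meanings and grammar are invented for illustration, not the study’s task or any model’s internals; the point is only that once a new primitive like “spin” is defined, every existing combination works with it immediately, with no further examples.

```python
# A hand-built compositional interpreter (illustrative, not the study's model).
# Primitive words map to actions; modifier words map to functions over action lists.

PRIMITIVES = {"jump": ["JUMP"], "walk": ["WALK"]}
MODIFIERS = {
    "twice": lambda actions: actions * 2,
    "in_a_circle": lambda actions: actions + ["TURN_360"],
}

def interpret(phrase: str) -> list[str]:
    """Interpret phrases of the form 'verb modifier*', e.g. 'jump twice'."""
    words = phrase.split()
    actions = PRIMITIVES[words[0]]
    for word in words[1:]:
        actions = MODIFIERS[word](actions)
    return actions

print(interpret("jump twice"))   # ['JUMP', 'JUMP']

# Adding one new primitive makes every existing combination available at once:
PRIMITIVES["spin"] = ["SPIN"]
print(interpret("spin twice"))   # ['SPIN', 'SPIN']
```

A standard neural network has no such explicit rule table; the question the study asks is whether the same behavior can be learned rather than hard-coded.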

In the new study, Lake and co-author Marco Baroni of Pompeu Fabra University in Barcelona tested both AI models and human volunteers using a made-up language containing words such as “dax” and “wif.” These words corresponded either to colored dots or to functions that changed the order of the dots in a sequence; strings of words thus determined the order in which the colored dots appeared.

So, given a nonsense expression, both the AI models and the humans had to figure out the underlying “grammar rules” that determined which dots the words corresponded to.
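To give a feel for how such a pseudo-language can work, the sketch below defines a toy vocabulary in the same spirit: some nonsense words name colored dots, while others act as functions that rearrange the sequence built so far. The specific words and rules are assumptions made up for this example; the study used its own grammars.

```python
# Toy pseudo-language in the spirit of the study's task (rules invented here).
DOT_WORDS = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

FUNCTION_WORDS = {
    "fep": lambda dots: dots[::-1],           # reverse the sequence so far
    "blicket": lambda dots: dots + dots[:1],  # append a copy of the first dot
}

def evaluate(utterance: str) -> list[str]:
    """Map a string of pseudo-words to the sequence of colored dots it denotes."""
    dots: list[str] = []
    for word in utterance.split():
        if word in DOT_WORDS:
            dots.append(DOT_WORDS[word])
        else:
            dots = FUNCTION_WORDS[word](dots)
    return dots

print(evaluate("dax wif lug fep"))  # ['BLUE', 'GREEN', 'RED']
print(evaluate("dax blicket"))      # ['RED', 'RED']
```

Participants and models never see these definitions; they see only word strings paired with dot sequences and must induce the mapping from examples.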

Human participants produced correct dot sequences approximately 80% of the time. When they failed, they made consistent kinds of errors, such as treating a word as a single dot rather than as a function that rearranges the entire dot sequence.

After testing seven AI models, Lake and Baroni developed a method called meta-learning for compositionality (MLC), which trains a neural network by having it practice applying different sets of rules to newly learned words, with feedback on whether it has applied the rules correctly.
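In outline, that kind of episode-based training can be sketched as follows: each episode samples a fresh set of rules, presents a few study examples generated under those rules, and evaluates the learner on held-out queries, so that over many episodes the network gets better at inferring whichever rules are currently in play. The sketch below only generates such episodes with a trivial word-to-color grammar; the model, loss, and update step are left as comments, since they are assumptions about where a real learner would sit, not the authors’ implementation.

```python
import random

# Schematic episode generator for meta-learning over rule sets (not the authors' code).
COLORS = ["RED", "GREEN", "BLUE"]
WORDS = ["dax", "wif", "lug"]

def sample_grammar() -> dict[str, str]:
    """Draw a fresh word-to-color mapping -- a stand-in for sampling rule sets."""
    return dict(zip(WORDS, random.sample(COLORS, len(WORDS))))

def make_episode(grammar: dict[str, str], n_study: int = 2):
    """Split the grammar's input/output pairs into study examples and queries."""
    items = [(word, [color]) for word, color in grammar.items()]
    random.shuffle(items)
    return items[:n_study], items[n_study:]

for episode in range(3):
    study, query = make_episode(sample_grammar())
    # A real model would condition on `study`, predict outputs for `query`,
    # receive feedback on whether it applied the episode's rules correctly,
    # and update its weights so that rule inference improves across episodes.
    print(f"episode {episode}: study={study} query={query}")
```

The crucial point is that the rules change from episode to episode, so the only strategy that pays off is learning to infer and apply rules on the fly.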

An MLC-trained neural network matched or outperformed humans on these tests. When the researchers added data on typical human errors, the AI model made the same mistakes as humans.

The authors also compared MLC to two neural-network-based models from OpenAI, the company behind ChatGPT, and found that on the dot task, both MLC and humans performed far better than the OpenAI models. MLC also handled additional tasks, such as interpreting written instructions and the meanings of sentences.

“They were pretty good at that task, computing the meaning of sentences,” said Paul Smolensky, a professor of cognitive science at Johns Hopkins University and senior principal scientist at Microsoft Research, who was not involved in the new study. However, the model’s ability to generalize was still limited. “It could work with the sentence types it was trained on, but it couldn’t generalize to new sentence types,” Smolensky told Live Science.

Still, “until this paper, we were not able to train a network to be fully compositional,” he said. “I think this is where their paper moves things forward,” despite its current limitations.

Improving MLC’s ability to demonstrate compositional generalization is an important next step, Smolensky added.

“This is the core feature that makes us smart, so we need to achieve it,” he said. “This study points us in that direction, but it doesn’t get us all the way there.”

Source: Port Altele
