
A simple logical question confounded even advanced AI

June 10, 2024

Researchers at the nonprofit AI research organization LAION have shown that even the most capable large language models (LLMs) can be tripped up by a simple question. In a paper that has not yet been peer-reviewed, the researchers describe asking various generative AI models the following question: “Alice has [X] brothers and she also has [Y] sisters. How many sisters does Alice’s brother have?”


The answer is not difficult. For example, if Alice has three brothers and two sisters, each brother has those two sisters plus Alice herself, so every brother has three sisters.
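
For readers who want to check the arithmetic, here is a minimal Python sketch (not from the paper; the function name and parameters are illustrative assumptions) that computes the answer for any counts of brothers and sisters:

def sisters_of_alices_brother(brothers: int, sisters: int) -> int:
    # Each of Alice's brothers has all of Alice's sisters plus Alice herself;
    # the number of brothers does not affect the answer.
    return sisters + 1

# Example from the article: three brothers and two sisters -> each brother has three sisters.
print(sisters_of_alices_brother(brothers=3, sisters=2))  # 3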

The researchers tested OpenAI’s GPT-3, GPT-4, and GPT-4o; Anthropic’s Claude 3 Opus; Google’s Gemini; and Meta’s Llama models, as well as Mistral AI’s Mixtral, Mosaic’s DBRX, and Cohere’s Command R+. When the models were asked the question, their performance clearly fell short of expectations.

Only one model, the new GPT-4o, passed the logic test. The others failed to recognize that Alice is also a sister to her brothers.


Source: Port Altele
