A simple logical question confounded even advanced AI
- June 10, 2024
Researchers at the nonprofit AI research organization LAION have shown that even the most advanced large language models (LLMs) can be tripped up by a simple question. In a paper that has not yet been peer-reviewed, the researchers described how they asked various generative AI models the following question: “Alice has [X] brothers and she also has [Y] sisters. How many sisters does Alice’s brother have?”
The answer is not difficult. If, for example, Alice has three brothers and two sisters, then each brother has Alice's two sisters plus Alice herself. So every brother has three sisters.
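The arithmetic behind the puzzle is trivial, which is what makes the models' failure notable. A minimal sketch (the function name is illustrative, not from the paper):

```python
def sisters_of_brother(brothers: int, sisters: int) -> int:
    """Alice has `brothers` brothers and `sisters` sisters.
    Each brother's sisters are Alice's sisters plus Alice herself."""
    return sisters + 1

# The article's worked example: three brothers, two sisters.
print(sisters_of_brother(3, 2))  # → 3
```

The number of brothers is irrelevant to the answer, a point the tested models consistently missed.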
The researchers tested OpenAI's GPT-3, GPT-4 and GPT-4o; Anthropic's Claude 3 Opus; Google's Gemini and Meta's Llama models; as well as Mistral AI's Mixtral, Mosaic's DBRX and Cohere's Command R+. When asked the question, the models clearly fell short of expectations.
Only one model, the new GPT-4o, passed the logic test. The others failed to recognize that Alice is herself a sister to the brothers in the family.
Source: Port Altele
As an experienced journalist and author, Mary has been reporting on the latest news and trends for over 5 years. With a passion for uncovering the stories behind the headlines, Mary has earned a reputation as a trusted voice in the world of journalism. Her writing style is insightful, engaging and thought-provoking, as she takes a deep dive into the most pressing issues of our time.