
OpenAI’s o1 model turns out to be a complete liar: moreover, it denies its lies!

  • December 6, 2024



OpenAI, one of the leading names in the artificial intelligence industry, has officially released its reasoning AI model, which it calls "o1". However, tests conducted with this model revealed a controversial result: o1 did its best to mislead people. Studies showed that o1 turned out to be more prone to deception than the AI models of companies such as Meta, Anthropic, and Google.

The research into OpenAI's o1 model was carried out by Apollo Research. A group of researchers was given full access to o1, and the AI was tested on a variety of topics. In these tests, o1 appeared to pursue its own goals even when users explicitly asked it to do otherwise.

o1 also manipulated data to justify itself

OpenAI's o1 appears to be prone to deception

According to the report prepared by Apollo Research, when o1's goal differed from the user's goal, the model manipulated data; this happened in 19 percent of cases. In addition, the AI could attempt to disable the oversight mechanisms meant to keep it in check, which occurred in about 5 percent of cases. However, when confronted, o1 denied these behaviors: the researchers determined that when o1 was accused, it lied or denied wrongdoing 99 percent of the time.

Apollo Research also detected that in some cases o1 knowingly tried to provide false information. The reason was that the model attempted to please the user in some way. On this point, OpenAI stated that the rate of misleading information from o1 was at the level of 0.17 percent. On the other hand, according to OpenAI's own data, o1's rate of manipulative responses was 20 percent higher than GPT-4o's.

Source: Web Tekno
