Ever since artificial intelligence tools like ChatGPT entered our lives, they have become our go-to helpers. We write songs with them, generate visuals, and, most importantly, ask them every question we can think of.
But should we trust the answers we get? According to research, not entirely. In fact, the rate at which some chatbots fabricate information is far from negligible.
In the AI literature, these fabrications are referred to as ‘hallucinations’.

OpenAI’s chatbot, ChatGPT, turned out to be the most reliable of the group: on average, 3% of its answers contained fabricated information. Meta’s Llama followed at 5%, Anthropic’s Claude at 8%, and Google’s PaLM at a striking 27%.
Google’s rate in particular is hard to ignore. If you use artificial intelligence in your work, these seemingly small errors can lead to serious consequences.
Here is a striking example.

According to The New York Times, a lawyer in Manhattan used artificial intelligence to prepare a brief to be submitted to a judge. The AI presented several fabricated court cases as if they were real, and the episode left the lawyer publicly disgraced.
We should therefore remember not to accept everything artificial intelligence tells us as true, and to verify the information we obtain from it against reliable sources, especially when we use it in our work or seek health-related information.
Since artificial intelligence is still a relatively young field, hallucinations like these are to be expected, but in the future we may well see tools that can easily detect false information. For now, let’s keep double-checking what we learn.