Google made an important move in the artificial intelligence race in recent weeks by launching its new chatbot, Bard. Like ChatGPT, this model can answer user questions on any topic.
However, some interesting events followed the model’s introduction. A few weeks ago, a rumor claimed that Bard had been trained on ChatGPT output, allegedly prompting a Google developer to resign; Google denied these claims. As discussion on the topic continues, new details have emerged about the internal crisis Bard has caused at the company. It appears employees are not satisfied with the artificial intelligence model.
Employees called Bard a “pathological liar” and begged Google not to launch the chatbot

According to a Bloomberg report based on internal messages from 18 current and former Google employees, staff harshly criticized Bard, describing the company’s chatbot as “worse than useless” and a “pathological liar.”
In those messages, one employee noted that Bard often gave users dangerous advice on topics such as how to land a plane or how to scuba dive. Another wrote, “Bard is worse than useless: please do not launch,” stressing how poor the model was and all but begging the company not to release it.
Bloomberg also reports that Google overrode a risk assessment from its own safety team. The team reportedly stressed in that assessment that Bard was not ready for general use; nevertheless, the company opened the chatbot to early access in March.
In trials, Bard proved faster than its competitors but less useful, giving less accurate information.
The allegations suggest that, in its rush to keep up with competitors, Google is pushing ethical and safety concerns aside

The report indicates that Google, trying to keep up with rivals such as Microsoft and OpenAI, brushed aside ethics and safety concerns and hastily released the chatbot. Google spokesperson Brian Gabriel told Bloomberg that ethical concerns about AI remain a top priority for the company.
Despite the risks, the rollout of AI models remains hotly debated

Some in the artificial intelligence world argue that this is not a problem: for these systems to improve, they must be tested by real users, and the known harm caused by chatbots so far is minimal. As is well known, these models have many controversial flaws, such as giving false information or biased answers.
We see this not only with Google’s Bard but also with OpenAI’s and Microsoft’s chatbots, and similar disinformation can be found simply by browsing the internet. Those who take the opposite view, however, argue that this misses the real problem: there is an important difference between being pointed to a bad source of information and receiving disinformation directly from an artificial intelligence. Information delivered by an AI can lead users to ask fewer questions and accept it as correct.
For example, in an experiment a few months ago, ChatGPT was asked what the most cited articles of all time were. The model answered with an article whose journal and authors were real; the article itself, however, turned out to be entirely fake.
Bard, for its part, was caught giving false information about the James Webb Space Telescope last month. In a GIF shared by Google, the model was asked about James Webb’s discoveries and answered that the telescope had taken the first picture of a planet outside the solar system. Many astronomers then pointed out that this was wrong: the first such photo was taken in 2004.
Such situations, of which there are many more examples, are worrying because chatbots deliver false information in a convincing way. In the heated artificial intelligence race, we will see how companies solve these problems in the future.