I know that, in general, I tend to be quite critical when I talk about Facebook and Meta. I have no particular animosity toward the social network or the company, although it is true that they have done plenty to earn a rather negative image over the years. On this occasion, however, it is worth clarifying that the security warning in the title is not because we are talking about a project from this particular company… or, well, not entirely.
The reason is that the company has launched BlenderBot 3, a general-purpose chatbot, for the moment only accessible from the United States (I tried to access it through two VPNs, but that did not work either), which, at least by its own definition, aims to offer both general conversation, of the kind you might strike up at a bar at any moment, and answers to the questions that are commonly put to digital assistants.
Like all LLMs, BlenderBot was trained on huge text datasets, from which it extracts the patterns that are then responsible for the responses the AI provides. Such systems have proven extremely flexible and have found a wide variety of uses, from generating code for programmers to helping authors write their next best seller. However, these models also have serious problems, such as absorbing the biases present in their training data, and the fact that when they do not know the right answer to a question, instead of admitting it, they tend to make one up.

And here we can say something positive about Meta, because BlenderBot's goal is precisely to test a possible solution to the problem of invented answers. The remarkable feature of this chatbot is that it can search for information on the internet in order to talk about specific topics. Even better, users can click on its answers to see where it got its information from. In other words, BlenderBot 3 can cite its sources, which provides a great deal of transparency.
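To make the idea more concrete, here is a minimal sketch (not Meta's actual implementation) of how a chatbot can ground a reply in a web search result and attach the source it used; the `search` and `generate` callables, and the refusal fallback, are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CitedAnswer:
    text: str        # the reply shown to the user
    source_url: str  # the page the reply was grounded in, shown as a citation

def answer_with_citation(question: str, search, generate) -> CitedAnswer:
    """search: callable returning a list of (url, snippet) tuples;
    generate: callable that turns a prompt into a reply string."""
    results = search(question)                 # hypothetical web search backend
    if not results:
        # Without evidence, refuse rather than invent an answer.
        return CitedAnswer("I don't know.", "")
    url, snippet = results[0]                  # keep the top hit as evidence
    prompt = (
        "Answer the question using only this excerpt.\n"
        f"Excerpt: {snippet}\nQuestion: {question}\nAnswer:"
    )
    return CitedAnswer(generate(prompt), url)  # reply plus a clickable source
```

The point of the sketch is the last line: because the answer is tied to the snippet it was generated from, the user can follow the URL and judge the source for themselves.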
So Meta's starting point is a good one, at least in theory. The problem is that chatbots as we know them today suffer from two issues. The first is that their learning is continuous, so it is enough for a large number of users to decide to instill a harmful bias in the AI for it, if it lacks the safeguards needed to prevent this, to end up "contaminated" and, therefore, to reproduce that bias.
And the second problem is related to the first: this type of algorithm works like a closed, opaque box, in the sense that it is not known what is going on inside. Those responsible therefore depend solely on constant observation of the AI; they cannot "lift the hood" to see what is happening in there, which makes identifying problems slow and difficult and, in many cases, makes solving them impossible.
So this Meta chatbot seems like a step in the right direction, but after so many bad experiences in the past, I admit that I remain rather pessimistic.