I was somewhat surprised to read that some international media are publishing, with fanfare, the news that Google has warned its workers about the potential risks of chatbots based on generative artificial intelligence models, including, of course, its own service, Bard. Unlike other occasions on which I have focused on the position of certain media, in this case the story is completely true, and the origin of the information can be found at Reuters.
According to several internal sources cited in the report, Google has given its employees two recommendations. The first is not to use confidential information in their conversations with chatbots, whether Bard, ChatGPT, Bing, Poe, or any other. The second is that, if they use Bard to generate programming code, they should never use it directly; that is, they should review and, where necessary, modify and correct the code generated by the Google chatbot.
“Google tells its employees not to use Bard”, “Google does not trust its own chatbot”, “Google advises its employees not to use the service it offers to the rest of the world”… in short, the collection of biased headlines is enough to make you cry. In fact, and as a general rule Reuters does a great job, its article begins with the text “Alphabet Inc is warning its employees against using chatbots, including its own Bard, while promoting the program around the world.”

That chatbots can give wrong answers is nothing new, and the risk that information we previously shared with these services may later be reproduced in other conversations is also well known. In addition, all companies that offer this type of service, or at least the main ones, warn their users about these issues. In other words, the warning Google issued to its workers is very similar to the one every user of the service receives.
The fact that a tech company like Google encourages users to adopt a service like Bard is perfectly compatible with informing and warning about its imperfections, and that it extends this recommendation to its own workers is… normal. So normal, in fact, that I am personally somewhat surprised Reuters considered it newsworthy. What doesn’t surprise me, however, is that the reaction of some media has been to find the most twisted ways to frame this news.
Sure, if that’s our intention, we can find aspects to improve in how Google manages its AI-based services; but headlines lacking prudence, rigor, and ethics are, at least from what we have been able to see, completely out of place. What’s more, I think that if Google has sinned at all, it is in being scrupulously prudent in a world that is moving very, very fast, and in which some competitors have taken advantage of the circumstances to try to get ahead.