Aiming to be one of the leading companies in the fast-moving artificial intelligence race, Google introduced Bard, its ChatGPT competitor, a few months ago. Refined over time, the AI tool can now answer a wide range of questions, and the company detailed the features Bard has to offer at its recent I/O event.
Now, a report shared by Reuters has revealed that Google's parent company Alphabet has a policy instructing its employees to be wary of chatbots on the market, including its own chatbot, Bard.
Google doesn’t want sensitive information shared with chatbots because of potential leaks

According to the report, which cites sources familiar with the matter, Alphabet advises its employees not to share confidential information with chatbots such as OpenAI's ChatGPT or its own Bard. The reason is the risk of leaks.
The company does not want its employees to enter sensitive information, since that information could end up being viewed by others. Chatbots can also use these inputs to train themselves, which poses a separate risk. Google has not made any official statement on the matter.
Many tech giants have banned their employees from using AI tools due to privacy concerns.

This appears to be a legitimate concern. Samsung, another technology giant, disclosed a data leak last month that occurred after its employees used ChatGPT, and the South Korean company subsequently banned them from using the tool.
Likewise, an Amazon lawyer earlier this year urged employees not to share sensitive information with ChatGPT, and it emerged in May that Apple had banned its employees from using tools such as ChatGPT, Bard, and GitHub's Copilot for privacy reasons. Some also argued that, beyond privacy, Apple's push to develop its own artificial intelligence technologies may have influenced that decision.