Barely one in five Belgian employees knows how ChatGPT processes their information, yet 42 percent share private company data with the AI chatbot.
According to a new study by Kaspersky, 42 percent of employees in Belgium share personal company data at work via ChatGPT, potentially exposing private data, intellectual property, and other sensitive information. Additionally, 44 percent of workers report that there are no policies governing the use of ChatGPT in the workplace, and 43 percent are unaware of how their data is processed.
Addressing privacy and content verification around AI tools is therefore a top priority for organizations today.
However, when it comes to using AI-generated content, more than half of employees (57%) say they do not verify its accuracy or reliability before presenting it as their own work. By contrast, a third of respondents (34%) do review the output before using it, even when copying and pasting it verbatim.
There are no rules or guidelines
Almost half of workers (44%) say there is currently a complete lack of rules and guidelines for using generative AI tools like ChatGPT in the workplace. Another 21 percent say rules do exist, but that they are not clear or comprehensive enough. Additionally, 24 percent believe rules or policies aren't even necessary at all, an attitude that could open the door to misuse of ChatGPT and raises privacy and transparency concerns.
When asked what exactly the policy entails, 38 percent of respondents say they received verbal instructions, either at a company meeting (22 percent) or individually (16 percent). Only 18 percent say the rules have been formalized in an official email, and just under 15 percent say a stand-alone formal document exists, suggesting that most Belgian organizations are not taking the matter seriously enough.
Time saving
A quarter of respondents (25%) admit to using ChatGPT for text tasks at work, such as generating content, translating, or revising texts. About as many (24%) use it as a time-saving measure, for example to summarize long texts or meeting notes.
Kaspersky warns in its report: "Companies that may benefit from the use of ChatGPT and other LLM services in the workplace should establish detailed policies for employees to govern their use. By establishing clear policies, employees can avoid both overuse and potential data breaches that would undermine even the strongest cybersecurity strategy."