Samsung recently gave its employees access to the artificial intelligence chatbot ChatGPT, developed by OpenAI, to streamline processes and support its chip manufacturing business. However, within roughly three weeks, three separate leaks of sensitive semiconductor information occurred, raising concerns about data security and privacy.
The leaks occurred when Samsung employees entered sensitive information, such as semiconductor equipment measurement data and source code, into ChatGPT. Because ChatGPT retains user input for model training, that information became part of the chatbot's training data and is potentially accessible to any ChatGPT user, not just Samsung.
The first leak occurred when an employee in Samsung's Semiconductor and Device Solutions division pasted source code from a semiconductor equipment measurement database into ChatGPT while looking for a quick fix. In the second, another employee entered code related to profitability and optimization; in the third, an employee asked ChatGPT to generate minutes from an internal meeting.
Samsung has taken steps to prevent further leaks, instructing employees to be mindful of the data they share with ChatGPT and limiting each submission to a maximum of 1,024 bytes. The company also warned that once information is entered into the chatbot, it is transferred to external servers and cannot be retrieved or deleted.
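A cap like that can be enforced on the client side before anything reaches an external API. The sketch below is a hypothetical illustration, assuming UTF-8 prompts and the 1,024-byte limit described above; the function name and policy wiring are invented for this example and are not Samsung's actual tooling.

```python
# Hypothetical client-side guard mirroring the per-submission byte cap
# reported in the article. Anything over the limit is rejected before
# it can be sent to an external chatbot service.

MAX_PROMPT_BYTES = 1024  # per-submission cap described in the article


def check_prompt_size(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bytes:
    """Return the UTF-8 payload if it fits within the limit, else raise."""
    payload = prompt.encode("utf-8")
    if len(payload) > limit:
        raise ValueError(
            f"Prompt is {len(payload)} bytes; policy allows at most {limit}."
        )
    return payload


if __name__ == "__main__":
    ok = check_prompt_size("Summarize this meeting in three bullet points.")
    print(f"Accepted prompt of {len(ok)} bytes.")

    try:
        # Simulate an oversized paste, e.g. a whole source file.
        check_prompt_size("x" * 2000)
    except ValueError as err:
        print(f"Rejected: {err}")
```

A size cap like this is a blunt instrument: it makes pasting entire source files harder, but it cannot distinguish sensitive content from harmless text, which is presumably why Samsung paired it with guidance to employees rather than relying on it alone.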
This incident highlights the importance of data security and the need for companies to weigh the risks and benefits of deploying AI chatbots in the workplace. While AI chatbots can increase productivity and streamline processes, using them safely requires appropriate security controls and employee training to protect sensitive information.