Samsung's semiconductor division relied on ChatGPT to help troubleshoot problems in its source code. In the process, employees inadvertently shared sensitive information with the AI chatbot, including the source code for a new program and internal meeting notes about company hardware. This was not an isolated event: three such incidents occurred within a single month.
To prevent similar breaches in the future, Samsung is developing its own internal AI tool, similar to ChatGPT, that will let employees get prompt assistance while safeguarding the company's confidential information. The tool will only process prompts that are 1024 bytes or less in size.
The problem arose because ChatGPT is operated by a third-party company on external servers. Consequently, when Samsung employees entered code, test sequences, and internal meeting content into the program, that data left the company's control. Samsung promptly notified its leadership and employees about the risks of compromised confidential information. Until March 11, Samsung had prohibited its employees from using the chatbot.
Samsung is not the only company grappling with this issue. Many companies are restricting the use of ChatGPT until they establish proper policies governing generative AI. It's important to note that although ChatGPT offers an opt-out option for data collection, information submitted to the service may still be accessible to its developer.
What is ChatGPT and how does it work?
ChatGPT is an AI application developed by OpenAI that uses deep learning to generate human-like text in response to user prompts. It can produce text in various styles and for diverse purposes, such as writing copy, answering questions, drafting emails, engaging in conversations, translating natural language to code, and explaining code in different programming languages.
ChatGPT functions by predicting the next word in a given text based on the patterns it has learned from a vast amount of training data. By analyzing the context of the prompt and its past learning, ChatGPT generates a response that aims to be as natural and coherent as possible.
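The next-word prediction described above can be illustrated with a toy sketch. The snippet below is a deliberately simplified bigram model, not OpenAI's actual architecture (which uses deep neural networks trained on vast corpora): it counts which word follows which in a tiny corpus and predicts the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy training corpus; a model like ChatGPT learns from vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count how often each other word follows it.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word: str):
    """Return the word most frequently observed after `word`, or None."""
    followers = bigrams.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' follows 'the' most often in the corpus
```

Real language models replace these raw counts with learned probability distributions over an entire vocabulary, conditioned on the full preceding context rather than a single word, which is what lets them produce coherent long-form responses.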
What measures has Samsung taken to prevent future leaks?
Following the data leaks via ChatGPT, Samsung has implemented several measures to prevent similar incidents from recurring. First, the company has advised its employees to exercise caution when sharing data with ChatGPT. It has also restricted the length of questions that can be submitted to the service, limiting them to 1024 bytes.
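A limit like the 1024-byte cap described above would most simply be enforced by measuring the encoded size of a prompt before submission. The sketch below is a hypothetical illustration (the function name and gating logic are assumptions, not Samsung's actual implementation); note that it measures bytes, not characters, since multi-byte UTF-8 text can exceed a byte limit with fewer than 1024 characters.

```python
MAX_PROMPT_BYTES = 1024  # the limit Samsung reportedly imposed

def is_prompt_allowed(prompt: str) -> bool:
    """Hypothetical pre-submission check: True if the prompt fits the byte cap."""
    # Encode to UTF-8 so the size is counted in bytes, matching the stated limit.
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES

print(is_prompt_allowed("How do I fix this off-by-one error?"))  # small prompt passes
print(is_prompt_allowed("x" * 2000))  # oversized prompt is rejected
```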
Furthermore, Samsung is developing an AI tool similar to ChatGPT but exclusively for internal employee use. This tool will let employees in the fabs receive prompt assistance while safeguarding the company's sensitive information. In the meantime, Samsung has warned its employees about the potential risks of using ChatGPT, emphasizing that data entered into the service is transmitted to and stored on external servers, making it impossible for the company to retain control over the data once it is transferred.
What were the consequences for the employees who leaked confidential data?
The available information does not explicitly state the consequences for the Samsung employees who leaked confidential data to ChatGPT. However, Samsung Electronics is taking steps to prevent further leaks of sensitive information through ChatGPT, including placing a limit of 1024 bytes on the size of submitted questions. The company has also cautioned its employees about the potential risks associated with using ChatGPT.
Moreover, Samsung Semiconductor is developing its own AI tool for internal employee use, which will only process prompts of 1024 bytes or less. These actions demonstrate that Samsung is committed to protecting its confidential information and is taking steps to mitigate the risks associated with the use of ChatGPT.
What type of confidential information was leaked?
Based on available information, Samsung employees leaked various forms of confidential company information to ChatGPT on three separate occasions. This included the source code for a new program, internal meeting notes, and information related to fab performance and yields. In one specific incident, a Samsung Semiconductor employee unintentionally submitted the source code of a proprietary program to ChatGPT to resolve errors, inadvertently exposing the code of a top-secret application to OpenAI, the maker of the chatbot. Because ChatGPT retains the data it receives, this confidential information is now in OpenAI's possession.
How many times did Samsung employees leak confidential data to ChatGPT?
According to the available information, Samsung employees leaked confidential company information to ChatGPT on at least three occasions.