Samsung employees are facing trouble after reportedly leaking confidential company information to OpenAI’s ChatGPT on three separate occasions.
This situation highlights how widely professionals now rely on the popular AI chatbot, but also the data-leakage risks that come with it.
OpenAI’s ChatGPT is a popular AI chatbot used by professionals to improve productivity and find solutions to problems. However, sharing company secrets with ChatGPT is a major risk.
Data submitted to ChatGPT or other consumer services can be used by OpenAI to improve its AI models. OpenAI retains that data unless users explicitly opt out, and it warns against sharing sensitive information because it is “not able to delete specific prompts.”
According to local Korean media reports, a Samsung employee copied the source code from a faulty semiconductor database into ChatGPT and asked for a fix.
In a separate case, an employee shared confidential code to try to find a fix for defective equipment. A third employee submitted an entire meeting to the chatbot and asked it to generate minutes.
After learning about the leaks, Samsung moved to contain the damage with an “emergency measure” capping each employee’s ChatGPT prompts at 1,024 bytes.
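Samsung has not said how the cap is enforced, but a byte limit like this is simple to check client-side before a prompt ever leaves the corporate network. A minimal sketch, assuming a gateway sits between employees and the chatbot (the function names and limit constant are illustrative, not Samsung’s):

```python
MAX_PROMPT_BYTES = 1024  # the cap reported in Samsung's emergency measure


def check_prompt_size(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt fits within the byte limit once UTF-8 encoded.

    Measuring bytes rather than characters matters: multi-byte characters
    (e.g. Korean text) consume the quota faster than ASCII does.
    """
    return len(prompt.encode("utf-8")) <= limit


def submit_prompt(prompt: str) -> str:
    """Reject oversized prompts before they reach the external API."""
    if not check_prompt_size(prompt):
        raise ValueError(f"Prompt exceeds {MAX_PROMPT_BYTES} bytes; trim it first.")
    return prompt  # placeholder for the actual API call
```

Checking encoded bytes rather than `len(prompt)` is the key design choice: a 1,024-character Korean prompt would be roughly three kilobytes in UTF-8 and would slip past a character-based check.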
This measure came just three weeks after Samsung lifted a previous ban on employee use of ChatGPT, a ban it had imposed over fears that exactly this kind of leak could occur. Samsung is now developing its own in-house AI.
The problem with sharing company secrets with ChatGPT is that those written queries don’t necessarily disappear when an employee shuts off their computer; as noted above, OpenAI may retain them for model training.
Samsung employees aren’t the only ones oversharing with ChatGPT. Recent research conducted by cybersecurity company Cyberhaven found that 3.1% of its customers who used the AI had at one point submitted confidential company data into the system.
Cyberhaven estimates that a company with around 100,000 employees could be sharing confidential data with OpenAI hundreds of times per week.
Other large firms have taken notice of this risk. In recent weeks, Amazon and Walmart have reportedly issued notices warning employees about sharing sensitive information with the AI model. Others, like Verizon and J.P. Morgan Chase, have blocked the tool for employees altogether.
As more and more companies adopt AI solutions, it is essential to implement measures that can protect confidential information from being leaked or breached.
While the benefits of AI chatbots like ChatGPT are apparent, companies must be cautious when using them to avoid data leakage. It is crucial that employees understand the potential risks of sharing sensitive information with these chatbots and take appropriate measures to protect confidential data.
Companies can also put in place guidelines and policies to regulate the use of AI chatbots and limit the amount of data shared with them. As AI technology continues to evolve, it is likely that more companies will be forced to confront similar issues related to data privacy and protection.
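One way such a policy can be backed by tooling is a redaction filter that scrubs likely-sensitive substrings before text is pasted into a chatbot. The sketch below is purely illustrative: the two patterns (email addresses and API-key-like tokens) are assumptions for the example, and real data-loss-prevention systems use far richer detection.

```python
import re

# Illustrative patterns only -- production DLP tooling detects many more
# categories (source code, customer records, credentials, etc.).
SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"), "[SECRET]"),
]


def redact(text: str) -> str:
    """Replace likely-sensitive substrings with placeholders."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

For example, `redact("contact alice@corp.com, token_abcdefghijklmnop1234")` would return the string with both the address and the token replaced by placeholders, so neither ever reaches the external service.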