Cybersecurity researchers have uncovered a significant breach: credentials for more than 100,000 ChatGPT accounts have been stolen by hackers and sold on the dark web. The compromised logins were harvested from malware-infected devices, according to Group-IB, the cybersecurity research firm that brought the findings to light.
Over the span of one year, approximately 101,000 accounts were compromised. The majority of victims were located in Asia, where more than 41,000 stolen accounts were offered for sale, while around 3,000 of the affected accounts belonged to users in the United States.
The compromised accounts were obtained with information-stealing malware, which infiltrates web browsers and extracts sensitive data such as saved passwords, decrypting them on the infected device. This approach let the attackers acquire ChatGPT logins without directly breaching OpenAI’s systems, which would have been a far more difficult task.
The attack relied on several malware strains, including Raccoon, Vidar, and RedLine, all of which use similar techniques to pilfer information. It is also worth noting that ChatGPT stores conversation history by default, so a compromised account can expose any sensitive data the user has entered into the platform.
To safeguard against such breaches, Group-IB advises users to regularly change their passwords and enable two-factor authentication for their accounts, as reported by TechRadar.
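Two-factor authentication typically pairs the password with a time-based one-time passcode (TOTP) generated by an authenticator app, so a stolen password alone is not enough to log in. The sketch below is a minimal illustration of that mechanism, assuming the third-party pyotp library; it is a generic example, not OpenAI’s implementation.

```python
# Minimal TOTP sketch using the pyotp library (assumed dependency).
# Illustrates the mechanism behind authenticator-app 2FA in general;
# this is not OpenAI's implementation.
import pyotp

# The service generates a shared secret once during 2FA enrollment and
# shows it to the user as a QR code or setup key.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives a short-lived code from the secret and
# the current time (a new code roughly every 30 seconds).
current_code = totp.now()
print("Code from the authenticator app:", current_code)

# At login, the service recomputes the expected code and compares it to
# what the user typed.
print("Correct code accepted?", totp.verify(current_code))
print("Wrong code accepted?", totp.verify("000000"))
```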
Concerns over data security have already prompted companies such as Google and Samsung to prohibit engineers from pasting code into generative AI tools like ChatGPT. Storing data on these platforms carries considerable risk if an account is accessed without authorization or if ChatGPT itself is compromised: attackers who obtain chat histories could mine them for code and use it to breach the systems that code belongs to once they are in production.
Given these circumstances, it is advisable to remove any sensitive data from ChatGPT conversations and to consider disabling the chat saving feature on your account, even though doing so means the platform can no longer learn from those queries. Whatever convenience the saved history offers, it is essential to remain mindful of the potential security exposure.
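For users who cannot avoid sending code or text to ChatGPT, one practical mitigation is to strip obvious secrets before a prompt ever leaves the machine. The sketch below is a hypothetical pre-filter built on regular expressions; the patterns and function names are illustrative assumptions, not an exhaustive or recommended redaction tool.

```python
# Hypothetical pre-filter that masks obvious secrets in text before it is
# pasted into ChatGPT or sent to an LLM API. The patterns below are
# illustrative assumptions and far from exhaustive.
import re

REDACTION_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),    # OpenAI-style keys
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY_ID]"),    # AWS access key IDs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),  # email addresses
    (re.compile(r"(?i)(password|passwd|secret)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(text: str) -> str:
    """Return a copy of `text` with known secret patterns masked."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

if __name__ == "__main__":
    prompt = "Debug this: login(user='ops', password: hunter2)  # key sk-abcdefghijklmnopqrstuvwxyz"
    print(redact(prompt))
```

Filtering prompts locally does not remove what has already been shared, so it complements, rather than replaces, deleting old conversations and turning off chat history.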