More than 100,000 stolen ChatGPT logins are for sale on the dark web

Security researchers have discovered the stolen credentials of more than 100,000 ChatGPT accounts for sale on an illicit dark web marketplace.

Cybersecurity firm Group-IB identified the stolen accounts within the logs of several popular data-stealing malware families, including Raccoon (the most prevalent), Vidar, and RedLine. Info stealers are a common form of malware, capable of harvesting banking details, website logins, browsing histories, and more from infected computers.

Compromised ChatGPT accounts have been appearing for sale in growing numbers since June 2022, when 74 were first posted on the dark web. That figure quickly climbed into the hundreds over the following months, reaching a staggering 26,802 in May 2023 alone. The cumulative total through May stands at 101,134 stolen logins.

The problem with compromised ChatGPT accounts is that the platform stores a user’s full message and reply history by default. This could include details of software development and corporate communications, along with other internal business processes. Many criminals also turn to ChatGPT for everything from modifying code to writing phishing and scam messages.

Broken down by region, the majority of the accounts come from Asia-Pacific, with India topping the tally at 12,632 stolen accounts, followed by Pakistan. The Middle East and Africa region ranks next, followed by Europe, Latin America, North America, and the Commonwealth of Independent States, which make up the rest.

“Many companies are integrating ChatGPT into their operational flow,” said Dmitry Shestakov, head of threat intelligence at Group-IB, in a blog post. “Employees enter classified correspondence or use the bot to optimize proprietary code.”

“Since ChatGPT’s standard configuration retains all conversations, this could inadvertently provide a trove of sensitive intelligence to threat actors if they obtain account credentials.”

Given the risks posed by the generative AI service, many companies have banned the internal use of ChatGPT, including Verizon, Apple, and Samsung. Samsung, in particular, enacted its ban after discovering that developers were using ChatGPT to fix bugs in internal code, which in turn exposed that proprietary code to the service.

“The headquarters is reviewing security measures to create a safe environment for the safe use of generative AI to improve employee productivity and efficiency,” an internal Samsung memo said in May.

“However, until these measures are prepared, we will temporarily restrict the use of generative AI.”
