Employees continue to share source code with generative AI engines

A new report shows that enterprise workers share sensitive data with generative AI engines like ChatGPT every hour of the working day.

Alarmingly, even after Samsung developers inadvertently leaked internal source code by sharing it with an AI engine, source code remains the most common type of sensitive data shared with these tools.

The Netskope Cloud and Threat Report: AI Applications in the Enterprise is based on data from “a few million employees” who work for companies protected by Netskope’s secure access service edge (SASE) solutions, including more than 625,000 workers at Australian businesses covered by the company’s services.

The report found that companies with more than 10,000 employees use, on average, five discrete AI applications every day. As expected, ChatGPT is the most popular: more than eight times as many workers use OpenAI’s offering as any other generative AI tool, and at the current rate of growth, Netskope expects enterprise AI use to double within seven months.

Google Bard, however, is the fastest-growing AI app in terms of usage.

Overall, 1 percent of workers use AI applications daily, and an average of 1 percent of their prompts contain sensitive data.

On a monthly basis, for every 10,000 enterprise users, 183 posts to tools like ChatGPT contain sensitive information, and source code is by far the most common type, accounting for 158 of those posts per 10,000 users.

Other shared data includes financial and health information, and even passwords embedded in source code.

“It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing,” Ray Canzanese, director of threat research at Netskope Threat Labs, said in an announcement. “Therefore, it is imperative that organizations put controls in place around AI to prevent leaks of sensitive data.”
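To make “controls around AI” concrete, here is a minimal sketch, in Python, of the kind of pre-submission filter Canzanese is describing: code that screens a prompt for obvious secrets before it is allowed to reach a generative AI tool. The patterns, the screen_prompt function, and the block-on-match policy are illustrative assumptions for this article, not a description of Netskope’s product.

```python
# Illustrative sketch only: a pre-submission check that scans a prompt
# for obvious secrets before it is forwarded to a generative AI service.
# The patterns and the block/allow policy are assumptions, not any
# vendor's actual detection logic.
import re

# Example patterns for secrets that commonly hide in pasted source code.
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),    # private keys
    re.compile(r"(?i)\b(?:password|passwd|pwd)\s*[:=]\s*\S+"),  # hardcoded passwords
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                        # AWS access key IDs
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt bound for an AI tool."""
    findings = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    return (not findings, findings)

if __name__ == "__main__":
    allowed, findings = screen_prompt('db.connect(password="hunter2")')
    print("allowed" if allowed else f"blocked, matched: {findings}")
```

In a real deployment this kind of inspection typically lives in a secure web gateway or SASE layer rather than in application code, but the principle is the same: examine the prompt before it leaves the organization.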

There is no doubt that some companies recognize the risks of AI tools in the workplace. The financial and healthcare sectors, in particular, are wary of AI: one in five organizations in those industries has enacted a complete ban on the use of ChatGPT. At the other end of the scale, only one in 20 technology companies has done the same.

However, such bans may not be the right answer, according to James Robinson, deputy chief information security officer at Netskope.

“As security leaders, we can’t simply decide to ban apps without impacting user experience and productivity,” Robinson said.

“Organizations should focus on developing their workforce awareness and data policies to meet the needs of employees who use AI products productively.

“There is a good path toward safely enabling generative AI with the right tools and the right mindset.”
