Op-ed: AI deserves a better security conversation than the one we’re having now

There is a big security conversation right now around the enterprise use of artificial intelligence.

On one level, we should be grateful that it’s happening, because it shows that the message about security is getting through: the best time to consider the security implications of an emerging technology is when it emerges, not later, when louder voices of concern are being raised and more genuine threats have emerged.

The question is, “Is the current conversation about AI security the ‘right’ one to have right now?”

That seems unlikely.

As with many emerging technologies, the conversation has been dominated from the start by a focus on dangers: in this case, the potential for misuse of generative AI, or threat actors leveraging AI to strengthen their attacks.

In reality, threat actors don’t need to be that sophisticated. There are vulnerabilities far more accessible and exploitable than anything generative AI could help them create. To manage the rise of enterprise AI, security teams don’t need to be especially sophisticated either: much can be achieved today, for example, by using network detection and response (NDR) to address the real problems posed by unrestricted AI use, while configuring base-level protections for the core of the business.

Address data security issues

There is one realistic threat vector worth discussing about AI now: data security.

This is a conversation that can be informed today by analyzing network-level activity to determine current levels of AI-related traffic. By understanding which users are generating AI traffic, security teams and business unit leaders can have more effective conversations with users about whether or not their usage is in line with the direction set in internal AI policies.
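
As a rough illustration of the kind of analysis involved, the sketch below scans exported DNS query logs for lookups of well-known generative AI domains and counts them per client. The log format (a CSV with timestamp, client_ip, and query columns) and the short domain list are assumptions made for the example, not the output of any particular NDR product.

```python
# Rough sketch: estimate AI-related traffic levels from exported DNS query logs.
# Assumes a CSV with columns "timestamp", "client_ip", "query"; the domain list
# is illustrative and deliberately short.
import csv
from collections import Counter

AI_DOMAINS = ("openai.com", "chatgpt.com", "anthropic.com", "gemini.google.com")

def count_ai_lookups(dns_log_path: str) -> Counter:
    """Count DNS lookups of known AI service domains, per client IP."""
    hits = Counter()
    with open(dns_log_path, newline="") as f:
        for row in csv.DictReader(f):
            query = row["query"].lower().rstrip(".")
            if any(query == d or query.endswith("." + d) for d in AI_DOMAINS):
                hits[row["client_ip"]] += 1
    return hits

if __name__ == "__main__":
    for client, count in count_ai_lookups("dns_queries.csv").most_common(10):
        print(f"{client}: {count} AI-related lookups")
```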

At a time when even governments have yet to issue formal guidelines regulating the use of generative AI, organizations are treading a fine line, trying to encourage experimentation while keeping some barriers in place.

But those barriers can be difficult to enforce. Generative AI lends itself to use by individuals, outside of any corporate oversight. Its adoption trajectory shares many of the characteristics of past “shadow IT” incursions, such as the initially uncontrolled way in which software as a service made its way into organizations, with business units or teams bypassing central purchasing controls to get access to new tools faster. The barriers to entry for generative AI are even lower; this time, a corporate credit card may not even be required.

While completely blocking the use of generative AI on the corporate network is a possible course of action (one already adopted by government agencies handling particularly sensitive data), it does not allow the organization to assess current usage levels or uncover potential cases of improper use.

Some organizations have been able to use NDR to gain visibility into employee use of AI as a service (AIaaS) and generative AI tools, such as OpenAI’s ChatGPT. NDR shows the devices and users on networks that connect to external AIaaS domains, the amount of data that employees share with these services, and in some cases, the type of data and individual files that are shared.

NDR can’t stop the behavior, but it allows security teams to identify who is using AI. From there, they can ask their own questions to determine if the use is approved and, if not, understand what has been done (and potentially what data has been fed into the models) before determining the best way to proceed.
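
A minimal sketch of that kind of follow-up, assuming flow records exported from an NDR tool or flow collector that carry per-flow user, destination domain, and outbound byte counts (the field names and domain list here are illustrative assumptions):

```python
# Minimal sketch: total outbound bytes per user to external AIaaS domains,
# from flow records. Field names ("user", "dest_domain", "bytes_out") and the
# domain list are assumptions for illustration.
from collections import defaultdict

AIAAS_DOMAINS = {"api.openai.com", "chatgpt.com", "api.anthropic.com"}

def bytes_sent_to_aiaas(flows):
    """Sum outbound bytes per user for flows terminating at AIaaS domains."""
    totals = defaultdict(int)
    for flow in flows:
        if flow["dest_domain"] in AIAAS_DOMAINS:
            totals[flow["user"]] += int(flow["bytes_out"])
    return dict(totals)

# Inline sample records standing in for a real flow export.
sample = [
    {"user": "alice", "dest_domain": "api.openai.com", "bytes_out": 48_000},
    {"user": "bob", "dest_domain": "intranet.example.com", "bytes_out": 12_000},
    {"user": "alice", "dest_domain": "chatgpt.com", "bytes_out": 5_500},
]
print(bytes_sent_to_aiaas(sample))  # {'alice': 53500}
```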

Establish a solid security foundation

NDR is also useful in establishing basic security hygiene to support broader use of generative AI (assuming it is on the agenda) and to mitigate risks that could arise from using the technology.

In any conversation about security risk, it is important to understand what happens when a risk breaks through the first line of defense. In these early stages, the first line of defense against AI misuse is policy enforcement; what matters, then, is having fundamental security controls able to detect and respond when that policy is not followed.

Importantly, NDR can do that not only for AI-based risks, but also for any risks posed by an organization’s use of any emerging technology.

Knowing what is on the network at all times, which protocols are in use, and where traffic is entering and leaving is essential to understanding the overall nature of traffic on the network. That visibility makes it possible to recognize, track, and block anomalous patterns or connections.
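
One simple expression of that idea, purely as an illustration: compare today’s outbound destinations against a baseline of destinations the network has talked to before, and flag anything new for review. The record shape and baseline set below are assumptions; real NDR tooling builds far richer behavioral baselines.

```python
# Illustrative sketch: flag outbound connections to destinations not seen in a
# baseline of previously observed traffic.

def flag_new_destinations(connections, baseline: set) -> list:
    """Return destinations present in current traffic but absent from the baseline."""
    new_destinations = []
    for conn in connections:  # each record has at least a "dest" field
        dest = conn["dest"]
        if dest not in baseline:
            new_destinations.append(dest)
            baseline.add(dest)  # grow the baseline as traffic is reviewed
    return new_destinations

baseline = {"update.example.com", "mail.example.com"}
today = [{"dest": "mail.example.com"}, {"dest": "unknown-upload.example.net"}]
print(flag_new_destinations(today, baseline))  # ['unknown-upload.example.net']
```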

Additionally, NDR plays a sort of cyber “cleanup” role in organizations, helping with general cleanup of the core network environment: removing old exploitable protocol vulnerabilities, cleartext passwords, and other exposures that a threat actor could try to take advantage of.
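
As a crude example of the kind of cleanup target this surfaces, the sketch below flags flows on ports associated with legacy or cleartext protocols. Mapping port numbers to protocols is a simplification used only for illustration; real NDR identifies protocols from the traffic itself.

```python
# Crude sketch: flag flows on ports commonly tied to legacy or cleartext protocols.
# Port-to-protocol mapping is a simplification used only for illustration.

LEGACY_PORTS = {23: "telnet", 21: "ftp", 80: "http (cleartext)", 139: "netbios/smb"}

def find_legacy_protocol_use(flows):
    """Return (source host, protocol name) pairs for flows on legacy-protocol ports."""
    findings = []
    for flow in flows:  # each record has "src" and "dest_port" fields
        proto = LEGACY_PORTS.get(flow["dest_port"])
        if proto:
            findings.append((flow["src"], proto))
    return findings

sample_flows = [{"src": "10.0.0.5", "dest_port": 23}, {"src": "10.0.0.9", "dest_port": 443}]
print(find_legacy_protocol_use(sample_flows))  # [('10.0.0.5', 'telnet')]
```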

At its core, NDR is a way to establish a solid security foundation that can be augmented with other, more specific layers of technology in the future. That could include deploying purpose-built security tools to mitigate AI risks, assuming AI-specific risk vectors emerge and such countermeasures become available.

Until that happens, the most practical way to deal with the threat of an emerging technology is to focus on what we know works: good security hygiene and network visibility.


Chris Thomas is a Senior Security Advisor at ExtraHop.
