The age of AI has arrived, and it is generating significant debate at every level.
At the Australian Government level, multiple reviews into the responsible development and use of AI are underway. In parallel, entire industries and the companies within them are trying to find their own comfortable ground. The rules they are formulating for themselves reflect individual risk appetites and, more often than not, an assessment of how safe it is to interact with AI platforms, particularly those that are freely available.
This level of activity is well justified. One of the risks AI poses today is uncontrolled proliferation. Without security guardrails, its use is evolving much as cloud and software-as-a-service did in their early days: as “shadow” or unauthorized deployments. It will take time to bring this under control. In fact, organizations today should assume AI is being used even where it has been expressly prohibited. The technology generates such intense interest that its use is almost impossible to monitor or control.
When it comes to the use cases themselves, a key risk that AI poses is the impact it could have by challenging or breaking established methods or norms. This applies to various areas, from ways of working to security.
We are just beginning to see how advanced artificial intelligence and machine learning can be used to penetrate established identity verification methods.
AI promises to make one of the main vectors by which threat actors obtain identity credentials – phishing attacks – much more sophisticated. AI-generated phishing emails are much less likely to have the obvious errors that make most of these emails so easy to spot.
AI also poses risks to emerging methods of identity establishment and verification. The cybersecurity industry has long counted on identity controls such as voice biometrics to augment or replace passwords. But there is growing evidence that cybercriminals are using AI and machine learning to bypass these advanced identity checks. In Australia and overseas, AI-based voice cloning has already been used to defeat voice verification systems. While vendors of voice biometrics are building safeguards to help mitigate this risk, the net effect is still an escalation of the threat posed by cybercriminals.
Finally, if an attacker manages to use AI to bypass a basic identity system and gain entry to a company’s environment, they can also use AI-powered malware to sit inside a system, collect data, and observe user behavior until they are ready to launch the next phase of the attack or exfiltrate the information they have collected, all with a relatively low risk of detection.
AI therefore presents multi-layered threats to current identity systems. And if it continues on its current trajectory, organizations will soon be forced to radically confront and reevaluate their future identity options and the protections surrounding those systems.
Considering how quickly things are evolving with AI, a new approach to protecting digital identity is probably warranted.
A combination of identity threat detection and response (ITDR) and decentralized identity (DCI) practices is emerging as the best way to keep data and identities secure in this new paradigm. With this two-pronged approach, users help manage their own identity data, while organizations support them by continuously monitoring the IT environment.
A strong response
ITDR helps an organization detect and respond to cyber attacks, while DCI improves security and privacy by reducing reliance on centralized data systems.
ITDR practices carefully monitor the IT network for suspicious and anomalous activity. By focusing on real-time identity signals and understanding permissions, settings, and connections between accounts, ITDR can be proactive in reducing the attack surface while detecting identity threats.
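To make this concrete, here is a deliberately simplified sketch in Python of the kind of signal scoring an ITDR tool might apply to login events. The event fields, weights, and alerting threshold are illustrative assumptions for this article, not a description of any vendor’s product.

```python
from dataclasses import dataclass, field

@dataclass
class LoginEvent:
    """One identity signal: a login, reduced to a few illustrative fields."""
    user: str
    country: str
    device_id: str
    is_admin_action: bool = False

@dataclass
class UserBaseline:
    """Locations and devices previously observed for this user."""
    countries: set = field(default_factory=set)
    devices: set = field(default_factory=set)

def risk_score(event: LoginEvent, baseline: UserBaseline) -> int:
    """Crude additive risk score: higher means more anomalous."""
    score = 0
    if event.country not in baseline.countries:
        score += 2  # unfamiliar location
    if event.device_id not in baseline.devices:
        score += 2  # unfamiliar device
        if event.is_admin_action:
            score += 3  # privileged action from an unknown device
    return score

baseline = UserBaseline(countries={"AU"}, devices={"laptop-01"})
event = LoginEvent("alice", country="VN", device_id="dev-99", is_admin_action=True)

if risk_score(event, baseline) >= 4:  # assumed alerting threshold
    print(f"ALERT: anomalous identity activity for user {event.user}")
```

A real ITDR deployment would correlate far richer signals (permission changes, token misuse, impossible travel) and feed them into automated response, but the shape of the logic is the same: compare live identity activity against a baseline and act on deviations.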
But as a stand-alone solution, ITDR is not enough to protect user data in today’s IT environment. This is particularly true of the centralized mass-storage practices common in identity and access management (IAM). In fact, many see ITDR as a tacit acceptance that large organizations will continue to hold people’s data and credentials simply because doing so is their business.
When it comes to protecting data in the age of AI, ITDR alone cannot keep sensitive information secure. The simple fact is that once something has been “detected”, you already have a problem on your hands, and it may be too late to mitigate the loss from the attack.
Since ITDR is more of a reactive approach to IAM, you need a complementary method to keep identity perimeters more secure. To fill this gap, DCI improves security and privacy by reducing an organization’s dependence on centralized data systems. In turn, it better protects people’s information in the event of a breach of a centralized database.
Centralized IAM data warehouses increase the risk of large amounts of data being compromised by an AI-powered cyberattack. With DCI, identity verification relies on providing a cryptographically verified credential rather than providing personal information stored in a centralized IAM database. DCI not only allows people to manage their own digital identities, but these credentials provide a secure, tamper-proof way for people to authenticate themselves. Additionally, the appeal of an attack is greatly reduced, as a breach will likely result in a single individual’s records being compromised, as opposed to the sensitive data of millions of people.
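As a rough illustration of that principle (a minimal sketch, not a full W3C Verifiable Credentials implementation), the Python example below uses the `cryptography` library’s Ed25519 keys to show how a verifier can check a signed credential against the issuer’s public key alone, with no lookup in a centralized identity store. The credential fields are invented for the example.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g., a licensing authority) signs the credential once.
issuer_key = Ed25519PrivateKey.generate()
credential = json.dumps({"subject": "alice", "claim": "over_18"}).encode()
signature = issuer_key.sign(credential)

# The holder keeps the credential and signature in their own wallet;
# verifiers only ever need the issuer's public key.
issuer_public = issuer_key.public_key()

# Verification happens locally: no central database is queried, and any
# tampering with the credential invalidates the signature.
try:
    issuer_public.verify(signature, credential)
    print("Credential accepted: issuer signature is valid.")
except InvalidSignature:
    print("Credential rejected: tampered with or not from this issuer.")
```

Because the verifier holds no personal data beyond what the credential itself presents, a breach of the verifier exposes, at worst, that one interaction rather than a central store of millions of records.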
With DCI offering a front-line defense alongside ITDR practices, industry-wide IAM best practices are being reviewed and refined, making it much more difficult for cybercriminals to successfully use AI against organizations to commit identity takeover and fraud.
Dr. Branden Williams is vice president of identity and access management strategy at Ping Identity.