It’s that time of year again when we cast our collective gaze as an industry forward and pontificate about what might come next.
Here, we’ve brought together leading industry experts to reflect on where artificial intelligence can take us: is it a blessing or a threat on the horizon?
Michael Armer, Chief Information Security Officer at RingCentral
A year of AI governance
AI adoption is occurring at a dizzying pace. Companies are under immense pressure to identify innovative ways to leverage AI and create differentiation. The reason is simple: if they don’t do it, their competitors will. I don’t see the rush to implement AI slowing down anytime soon, but to mitigate the risk of uncontrolled AI, I think leadership teams will start to implement some controls around its adoption. Over the next year, AI governance will begin to catch up with AI deployments as companies establish and build institutional and legal structures around the use of AI.
Thomas Fikentscher, CyberArk ANZ Regional Director
AI: friend or foe?
AI has remained a prominent topic in mainstream debate for quite some time, often seen as a productivity tool. In cybersecurity, however, both offensive and defensive, it has been at the center of every discussion for the last year, and we find ourselves grappling with a critical question: how do we effectively leverage AI within products? Or is this simply a necessary phase of frustration that we must go through before its value comes to the fore?
Use cases for new technologies are racing ahead while security lags behind. We must proactively protect these emerging use cases, as they will play a critical role in an AI-driven future. Much as rapid cloud adoption left an often-overlooked gap in identity security, we are now seeing a gap between the pace of AI adoption and the pace of security: we don’t know where the risk profiles actually lie, or how they will emerge as cyber attacks.
To address this, we must harness the positive side of AI to plug these security holes, using it to quickly predict and identify weaknesses in user behavior so that we can detect, and ideally prevent, deviations from normal patterns.
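To make that concrete, here is a minimal sketch of behavioral anomaly detection of the kind described above, using an off-the-shelf unsupervised model. The features, thresholds and library choice (scikit-learn) are illustrative assumptions, not CyberArk’s implementation:

```python
# Illustrative sketch only: flagging deviations from normal user behaviour
# with an unsupervised anomaly detector. Feature names and thresholds are
# hypothetical, not any vendor's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behaviour: [login hour, MB downloaded, failed logins per day]
normal_activity = np.column_stack([
    rng.normal(10, 1.5, 500),   # logins cluster around 10:00
    rng.normal(50, 10, 500),    # ~50 MB downloaded per day
    rng.poisson(0.2, 500),      # failed logins are rare
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# A 3 a.m. login with a large download and repeated failures stands out.
suspicious = np.array([[3.0, 900.0, 6.0]])
print(detector.predict(suspicious))  # -1 indicates an anomaly
```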
Manny Rivelo, CEO of Forcepoint
AI policies will evolve rapidly to keep pace with the market
In 2024, AI-related innovations will create new possibilities that we are not even considering right now. In the future, organizations of all sizes will need to create and expand corporate AI policies that govern how employees can safely interact with AI. And AI security policies will need to go beyond commercial AI tools to also cover internally developed GPTs and LLMs. At Forcepoint, we have web and data security solutions, all designed to prepare for the future adoption of emerging technologies like GenAI, no matter how quickly the technology landscape evolves.
Andy Patel, WithSecure researcher
Democratization of AI and its uses in 2024: get ready
Open source AI will continue to improve and become widely used. These models herald a democratization of AI, shifting power from a few closed companies into the hands of humanity. A lot of research and innovation will take place in that space in 2024. And while I don’t expect supporters on either side of the open-versus-closed security debate to switch sides, the number of high-profile proponents of open source is likely to grow.
AI will be used to create disinformation and influence operations in the run-up to the high-profile elections of 2024. This will include synthetic written and spoken content, and potentially even images or video. Disinformation will be incredibly effective now that social networks have scaled back or completely eliminated their moderation and verification efforts. Social media will increasingly become a cesspool of AI- and human-generated garbage.
Rob Dooley (VP APJ) and Sabeen Malik (VP global government affairs and public policy) at Rapid7
AI and automation
Given the volume of attacks, the use of AI and automation will accelerate in 2024. It’s one thing to see threat intelligence, but another to act on it, and that will depend on more automated responses. On average, just 14 hours pass between the identification of a new vulnerability and its exploitation, so with the advent of AI and more advanced automation techniques, much of the work of detection, remediation and prevention will be done automatically.
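As a rough illustration of that detection-to-remediation loop, the sketch below polls CISA’s public Known Exploited Vulnerabilities feed and raises patch tickets for matching assets. The inventory, ticketing hook and feed field names are assumptions for illustration, not Rapid7’s tooling:

```python
# Sketch of automated triage: poll a public exploited-vulnerabilities feed
# and flag affected assets for patching. The asset inventory and ticketing
# hook here are hypothetical stand-ins.
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

# Hypothetical inventory: CVE identifiers already mapped to internal assets.
asset_inventory = {
    "CVE-2023-4966": ["citrix-gw-01", "citrix-gw-02"],
}

def open_patch_ticket(asset: str, cve: str) -> None:
    # Placeholder for a real ticketing/orchestration integration.
    print(f"PATCH NOW: {asset} is exposed to actively exploited {cve}")

kev = requests.get(KEV_URL, timeout=30).json()
for vuln in kev.get("vulnerabilities", []):
    cve = vuln.get("cveID")
    for asset in asset_inventory.get(cve, []):
        open_patch_ticket(asset, cve)
```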
But some caution is needed. Inevitably, some AI capabilities will miss the mark simply because the solution was rushed to market. The continued adoption of tools like ChatGPT also carries risks. But as with any new technology, while it can be used maliciously, the pace of innovation moves so quickly that it’s difficult to make concrete predictions; even if ChatGPT is exploited, that doesn’t necessarily mean organizations should stop using it.
Reinhart Hansen, Chief Technology Officer, Imperva CTO Office
Organizations will get a ‘generative AI reality check’
Although the continued advancement of GenAI is inevitable, the hype surrounding it will take a hit in 2024. Like most technologies, its adoption will bring both beneficial and detrimental aspects, often marked by exaggerated claims, particularly in its early stages of development. This is where the concept of “AI washing” comes into play: companies falsely advertising the integration of AI into their products or services, misleading consumers. In this evolving landscape, one thing is certain: cybercriminals will leverage AI to build new, never-before-seen attack vectors and generate new variants of existing vulnerabilities, leading to a wave of new zero-day attacks. The industry will need to work diligently to respond to and mitigate these threats, ensuring that the promising future of AI remains safe and beneficial for all.
Craig Bates, Vice President of Australia and New Zealand at Splunk
AI will open a Pandora’s box of growing privacy and security problems
The transformative power of AI, while promising for security professionals, is a double-edged sword, raising concerns about growing challenges to privacy and security. This dichotomy took center stage as the Australian federal government recently unveiled its Cyber Security Strategy 2023-2030.
What we are seeing is that today’s CISOs and IT professionals are not blocking this technology; rather, they are widely leveraging AI as a tool for cyber defense, generating new solutions that take over mundane technical tasks and support strategic functions, from improving data quality assurance to alert prioritization, security posture analysis and internal communication management. The government’s commitment underscores the urgency of addressing evolving cyber threats, with AI as a formidable ally in this mission.
On the other hand, the expanded attack surfaces resulting from the evolution of AI applications underscore a looming wave of security incidents, from weaponizing AI to introducing more realistic impersonations or widespread malware.
Amid these challenges, there is a silver lining in the anticipation of stricter privacy regulations in Australia. Striking the right balance between AI innovation and security is a pressing challenge. Regulatory efforts must deal with the dynamic and rapidly evolving landscape because, unfortunately, cybercriminals will not play by the same rules.
Josh Lemos, CISO at GitLab
AI will replace ‘shift left’ security with security automation
Shift-left security was intended to fix security flaws earlier in the software development lifecycle by moving security closer to the developer. But the consequence of this increased responsibility has been to burden developers beyond reason. In 2024, shift-left security will be replaced by automating security outside of the developer workflow, something I call shifting down, as it moves security into lower-level, automated functions. AI will help automate the identification and resolution of security issues, reducing the security burden on developers by giving them less, but more actionable, feedback.
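A minimal sketch of what “shifting down” can look like in practice: a pipeline step that runs a SAST scanner automatically and surfaces only high-severity, actionable findings, so developers never have to invoke the tool themselves. Bandit is used here purely as an example scanner, and the severity policy is an assumption, not GitLab’s implementation:

```python
# Minimal sketch of "shifting down": running a SAST scan as an automated
# pipeline step rather than asking developers to run it themselves.
# Bandit is an example scanner; the failure policy is illustrative.
import json
import subprocess
import sys

# Run Bandit recursively over the repo and capture machine-readable output.
scan = subprocess.run(
    ["bandit", "-r", ".", "-f", "json", "-q"],
    capture_output=True, text=True,
)

report = json.loads(scan.stdout or "{}")
high_findings = [
    r for r in report.get("results", [])
    if r.get("issue_severity") == "HIGH"
]

# Surface only actionable, high-severity findings back to the developer.
for finding in high_findings:
    print(f"{finding['filename']}:{finding['line_number']}: "
          f"{finding['issue_text']}")

sys.exit(1 if high_findings else 0)  # fail the pipeline on HIGH findings
```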
Shane Maher, CEO of Intelliworx
AI and cybersecurity threats
The growing use of AI-enabled deepfakes and tools like ChatGPT poses an escalating threat, making phishing emails more convincing and harder to detect. Organizations need to strengthen the security of email and online collaboration environments, and emphasize awareness of AI tools, to avoid the compromise of digital assets.
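As a toy illustration of one such email defense, the sketch below flags sender domains that closely resemble, but do not exactly match, a trusted list, a common trait of phishing campaigns. The trusted domains and similarity threshold are illustrative assumptions only:

```python
# Toy heuristic: flag sender domains that look like, but are not,
# trusted domains. Trusted list and threshold are illustrative.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"intelliworx.com", "microsoft.com", "paypal.com"}

def lookalike_score(domain: str, trusted: str) -> float:
    return SequenceMatcher(None, domain, trusted).ratio()

def is_suspicious(sender: str, threshold: float = 0.85) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a trusted domain
    # Near-miss spellings (e.g. "micros0ft.com") score just below 1.0.
    return any(lookalike_score(domain, t) >= threshold
               for t in TRUSTED_DOMAINS)

print(is_suspicious("billing@micros0ft.com"))  # True: lookalike domain
print(is_suspicious("support@microsoft.com"))  # False: trusted domain
```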
Matthew Koertge, CEO of Telstra Ventures
AI predictions for 2024
2024 will be the year the world better understands the true potential of generative AI.
While AI has been around for many years, ChatGPT demonstrated how impactful LLMs can be. This technology will find powerful applications in areas such as content creation, entertainment, business processes, personal assistants, customer service, healthcare and education. Generative AI will boost productivity by freeing up people’s time for higher-value work, creativity and innovation.
Oakley Cox, Technical Analyst Director at Darktrace
Generative AI will allow attackers to phish across language barriers
For decades, most cyber social engineering, such as phishing, has been carried out in English. The language is used by hundreds of millions of people in North America and Europe and dominates commercial operations in much of the rest of the world. As a result, it has not been worth the effort for cybercriminals to use local languages when English can do the job just fine.
This has made APAC a relatively safe haven. The diversity of local languages has restricted the extent to which hackers can attack the region. Employees know to be on the lookout for phishing emails written in English, but are complacent when they receive emails written in their local language.
With the introduction of generative AI, the barrier to writing text in foreign languages has been drastically lowered. At Darktrace, we have already observed increasingly sophisticated use of English in phishing attacks. Now we can expect attackers to add languages that were previously considered too complex to be worth the effort, including Mandarin, Japanese, Korean and Hindi.
Additionally, phishing emails in local languages are likely to generate huge rewards for cybercriminals. Email security solutions trained on English emails are unlikely to detect attacks in other languages, and the messages will land in the inboxes of people who are not used to receiving social engineering attempts in their native language.