Artificial intelligence (AI) tools have attracted a lot of attention recently due to their impressive capabilities.
Tools such as ChatGPT are powered by large language models (LLMs) that can generate complex writing, including research articles, poems, press releases, and even software code in multiple languages, all in a matter of seconds.
These AI tools have already started to revolutionize several industries, including software development, where they are being leveraged to speed up the development process.
However, along with the excitement around AI, there are also concerns about its potential risks. Some experts fear that the rapid advancement of AI systems could lead to a loss of control over these technologies and pose an existential threat to society. The question remains: will AI-based tools be useful in the long term or will they simply be a security risk waiting to explode?
Leverage AI for security
Despite current concerns and debate about whether AI research and development should be paused, AI tools have already made their way into software development. These tools can generate code quickly, and in multiple languages, easily surpassing the speed of human developers. Integrating AI into cybersecurity tools also has the potential to improve the speed and accuracy of cyber threat detection; for example, AI can be deployed to analyze large amounts of data and quickly identify patterns and anomalies that would be difficult for humans to detect.
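To make that idea concrete, the sketch below shows what unsupervised anomaly detection over security telemetry can look like in practice. It is a minimal illustration, assuming scikit-learn and synthetic data; the features (request rate, failed logins, bytes transferred) are placeholders rather than a description of any particular product.

```python
# A minimal sketch of ML-based anomaly detection on security telemetry.
# Assumes scikit-learn is installed; the three features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" traffic: modest request rates, few failed logins.
normal = rng.normal(loc=[60, 1, 5_000], scale=[15, 1, 1_500], size=(1_000, 3))

# Simulate a handful of suspicious events: bursts of requests and failures.
suspicious = rng.normal(loc=[400, 25, 90_000], scale=[50, 5, 10_000], size=(5, 3))

events = np.vstack([normal, suspicious])

# Train on the mixed data; IsolationForest isolates outliers without labels.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)  # -1 = anomaly, 1 = normal

anomalies = np.where(labels == -1)[0]
print(f"Flagged {len(anomalies)} of {len(events)} events for analyst review")
```

The specific model matters less than the workflow: the algorithm distills a large volume of events into a short list of outliers, and human analysts decide which ones are genuinely malicious.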
AI-enhanced security tools can also significantly decrease the number of false positives and take over some of the most time-consuming security tasks, allowing development and security teams to focus their resources on critical issues.
Additionally, AI’s ability to respond to prompts without the need for extensive research or interviews offers a unique advantage: it frees humans from repetitive programming tasks that would otherwise demand around-the-clock effort.
A force for good and evil
While AI-enhanced security tools can be used to better protect organizations (and their end users) from cyber threats, the technology can also be used by malicious actors to create more sophisticated attacks and automate activities that mimic human behavior without being detected by some security software. There are already reports of hackers leveraging AI to launch machine learning-based penetration tests, impersonate humans on social media in platform-specific attacks, create fake data, and crack CAPTCHAs.
It is important to recognize that while modern AI tools excel in certain areas, they are far from perfect and, for now, should be considered a scaled-up version of the autocomplete feature commonly found on smartphones and in email applications. And while AI can provide substantial assistance to people familiar with coding and help them perform specific tasks more efficiently, challenges will arise for those who expect AI tools to produce and deliver complete applications.
For example, AI may provide incorrect answers because of biases in the data sets it is trained on, and when it comes to coding, its output may omit crucial safeguards, requiring human intervention and extensive security testing.
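As a hypothetical illustration of the kind of omission involved, consider a database lookup that an AI assistant might plausibly generate. The first function interpolates user input directly into SQL, a classic injection flaw that security testing should catch; the corrected version parameterizes the query. Both function names are invented for this example.

```python
# Hypothetical illustration: a plausible AI-generated snippet that omits
# a crucial safeguard, followed by the human-corrected version.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flawed pattern: user input interpolated into SQL enables injection,
    # e.g. username = "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Corrected pattern: a parameterized query keeps input out of the SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions run and return the same results on benign input, which is exactly why this class of flaw slips past a reviewer who only checks that the code works.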
The importance of human-AI collaboration and monitoring in application security testing
A recent demonstration by Synopsys researchers highlighted the need for human oversight of AI-generated code after observing cases in which the AI failed to identify an open source license conflict in the code it produced. Ignoring license conflicts can be very costly and create legal problems for an organization, highlighting the current limitations of AI-enhanced tools.
There have also been cases where AI-generated code included open source code fragments that contained vulnerabilities. It is therefore imperative that organizations leveraging AI adopt comprehensive application security testing practices to ensure that the code they generate is free of license conflicts and security vulnerabilities.
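One lightweight practice in that direction is checking every open source component that appears in generated code against a public vulnerability database. The sketch below queries the OSV.dev API for known advisories against a package version; the package and version shown are placeholders, not a recommendation.

```python
# A minimal sketch of one such practice: checking an open source
# dependency against the public OSV.dev vulnerability database.
# The package name and version below are placeholders.
import json
import urllib.request

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI"):
    """Return the list of known OSV advisories for a package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # The response body is JSON; advisories, if any, are under "vulns".
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

for advisory in known_vulnerabilities("jinja2", "2.4.1"):
    print(advisory["id"], advisory.get("summary", ""))
```

A check like this does not replace full software composition analysis, but it shows how cheaply a vulnerability gate can be added to a build pipeline.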
For attackers and defenders alike, cybersecurity is a never-ending race, and AI is now an integral part of both sides' toolkits. As AI-assisted attacks become more sophisticated, AI-assisted security tools will be needed to counter them, which makes human-AI collaboration increasingly important: by delegating routine tasks to an AI-integrated security tool, humans are freed to contribute the unique, actionable insights needed to mitigate attacks.
The need for human intervention may decrease with each step of AI's evolution, but until then, maintaining an effective and comprehensive application security program is more critical than ever.
Kelvin Lim is Director of Security Engineering, APAC, at Synopsys Software Integrity Group.