ChatGPT is a term that has become table talk over the last month. It seems you can’t go anywhere without hearing or reading about how the emerging technology responds to questions and prompts in human-like text.
Its ability to generate text has been used to make digital life more efficient, stimulate creativity, and even write music, plan events and support technical tasks. But as with any new technology, there are concerns about its potential misuse, with cybercriminals taking advantage of technical advances to create sophisticated phishing emails, ransomware and malware.
Emerging Cybersecurity Threat Landscape
While tools like ChatGPT attempt to limit malicious input and output, threat actors are always exploring new ways to leverage emerging technology for nefarious purposes. In the wrong hands, computer-generated content can be used to spread misinformation, create false identities and manipulate public opinion, exacerbating social and political unrest, especially in countries like Australia. The latest Trellix Threat Report, published in February 2023, examined the already fragile state the country finds itself in: malicious activity in the fourth quarter of 2022 revealed that Australia was the sixth most affected country by LockBit 3.0, one of the most aggressive forms of ransomware.
The report also highlights that the transportation and shipping sector has been particularly affected by nation-state activity, accounting for 69 percent of attacks targeting critical infrastructure industries. This has important implications for supply chain security, as disruptions to transportation and shipping can have a ripple effect on other industries. The energy, oil and gas sectors are also at risk, emphasizing the importance of strong cyber defenses in these critical infrastructure industries.
Business email compromise (BEC) is another growing threat, and spoofed CEO emails are a common tactic used by cybercriminals recently. According to the Trellix Threat Report, 78 percent of BEC attacks involve fake emails from CEOs that use common CEO phrases. This represents a 64 percent increase over the past three months, indicating that the tactic is becoming more prevalent. In some cases, cybercriminals use vishing schemes to extract sensitive employee information. This involves using a fake phone number to call employees and ask them to confirm their direct phone number, which can then be used in future attacks. Perhaps most worrying is that 82 percent of these attacks are sent using free email services, meaning threat actors don’t need special infrastructure to run their campaigns. This makes it easier for cybercriminals to carry out attacks and harder for organizations to defend themselves.
As we explore new technological advances like ChatGPT, organizations must make the most effective use of scarce resources to protect against potential emerging threats. Cybersecurity must be a top priority during this era of exploration, as both businesses and individuals take steps to protect themselves against the growing threat of cyberattacks. This includes implementing strong security protocols, updating software regularly, and educating employees across industries on how to recognize and respond to potential threats.
The hidden truth
These threatening possibilities suggest that the emergence of computer-generated content has become a battleground for both benign and malicious intentions in Australia. However, it is essential to remember that ChatGPT itself is not malicious. In fact, the innovative tool has the potential to make the lives of Australians easier and more efficient, and even support cybersecurity professionals in a variety of ways.
As a language model, it has been trained to understand natural language and can be used to make difficult concepts easier to grasp by explaining them in simpler terms. Through this model, the powerful artificial intelligence (AI) tool can help cut through the complexity of cybersecurity by developing code, steps, guided investigations and plans that can help combat potential threats. Its ability to understand and respond to queries in natural language makes it a valuable asset in supporting sophisticated challenges, while its scalability and adaptability make it well suited to helping large organizations manage their systems.
In addition to helping cybersecurity professionals, ChatGPT can help people generate text for routine tasks like responding to emails, composing social media posts, creating blog posts, or scheduling meetings, freeing up time for more important tasks. It can also be used to provide personalized recommendations based on a user’s preferences and past behavior, making it easier to find the information and services that are most relevant to them.
As ChatGPT is exposed to more data, it will better understand and respond to natural language queries. This means it can adapt to new opportunities and challenges over time, making it a valuable asset to our society, like many other technological advances of our time. As the world becomes more dependent on technology, innovations like ChatGPT will play an increasingly important role in making all aspects of our lives easier.
Untapped success on the horizon
The fact of the matter is that we are entering a new era of technology. It’s time to start looking at how to integrate these services and learn new skills so we can be more effective. As we explore those opportunities, we must remain alert to potential risks and remember that ChatGPT is still an evolving technological tool with plenty of room to grow.
Incorporating security solutions such as email security, endpoint security, data loss prevention and network detection and response can help keep Australian organizations protected. There is a real opportunity to leverage AI technologies to make what we do more efficient and effective if we can keep systems protected as we explore this new frontier.
Luke Power is ANZ Managing Director at Trellix.