Experts want a pause on AI, but one security expert believes education and regulation are key
Last week, a group of industry experts signed an open letter calling for a pause in the current pace of AI development.
Signatories include Skype co-founder Jaan Tallinn, Apple co-founder Steve Wozniak, and politician Andrew Yang, along with several artificial intelligence (AI) specialists. Elon Musk, still grappling with his recent acquisition of Twitter, was also among the signatories.
The letter was published by the Future of Life Institute and essentially states that AI research is “out of control” and is advancing too quickly and in ways that not even the creators of this new AI can adequately predict.
“Therefore, we ask all AI laboratories to immediately suspend for at least six months the training of AI systems more powerful than GPT-4,” the letter said. “This pause should be public and verifiable, and include all key actors. If that pause cannot be implemented quickly, governments should step in and institute a moratorium.”
But one security expert believes a pause is not the only answer to the challenges AI presents.
“AI research is advancing at such a pace that we need to regulate it or learn to defend against its products before they get out of control,” Jake Moore, global cybersecurity advisor at ESET, said by email. “The answer lies not only in stopping rapid growth, but also in adopting the right regulation and working together to align the future.”
“To regulate AI, data quality must be regulated in the early stages, but problems could arise when different countries and regions adopt different policies. Deepfakes, for example, are a significant cybersecurity threat and require attention, but regulation will be difficult to implement globally, so awareness and education remain key.”
“More educational and technological countermeasures will be needed as cybercriminals are cleverly using these new techniques to attack people and businesses,” Moore said.
“With AI-powered phishing set to grow in both volume and quality, and deep learning autonomously generating convincing fakes, we will soon need additional layers of security to prove identity, even for the most trivial transactions.”