Autonomous car bombs, online recruitment: Experts worry how AI can transform terrorism
Experts worry that terrorists will find novel and problematic uses for artificial intelligence (AI), including new methods of delivering explosives and improved online recruitment efforts.
"The reality is that AI can be extremely dangerous if used with malicious intent," Antonia Marie De Meo, director of the United Nations Interregional Crime and Justice Research Institute, wrote in a report looking at how terrorists might use AI.
"With a proven track record in the world of cybercrime, it is a powerful tool that could conceivably be employed to further facilitate terrorism and violent extremism conducive to terrorism," she added, citing self-driving car bombs, augmenting cyberattacks or finding easier paths to spread hate speech or incite violence online.
The report, "Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes," concludes that law enforcement will need to remain at the cutting edge of AI.
"It is our aspiration that [this report] is the beginning of the conversation on the malicious use of AI for terrorist purposes," De Meo wrote.
The report noted that staying ahead of terrorists and anticipating how they might use AI will prove a stubborn task: it requires law enforcement not only to imagine uses of AI that no one has considered before, but also to figure out how to stop each of those uses.
The report backs up a study produced jointly by NATO's Centre of Excellence Defence Against Terrorism (COE-DAT) and the U.S. Army War College Strategic Studies Institute, "Emerging Technologies and Terrorism: An American Perspective," which argued that "terrorist groups are exploiting these tools for recruitment and attacks."
"The line between reality and fiction blurs in the age of rapid technological evolution, urging governments, industries, and academia to unite in crafting ethical frameworks and regulations," the authors wrote in the forward.
"As geopolitical tides shift, NATO stresses national responsibility in combating terrorism and advocating for collective strength against the looming specter of technology-driven threats," the authors added.
The study notes general cases of OpenAI's ChatGPT being used to "improve phishing emails, plant malware in open-coding libraries, spread disinformation and create online propaganda."
"Cybercriminals and terrorists have quickly become adept at using such platforms and large language models in general to create deepfakes or chatbots hosted on the dark web to obtain sensitive personal and financial information or to plan terror attacks or recruit followers," the authors wrote.
"This malicious use is likely to increase in the future as the models become more sophisticated," they added. "How sensitive conversations and Internet searches are stored and distributed over AI platforms or via large language models will require more transparency and controls."
Earlier this year, West Point's Combating Terrorism Center published research on the subject, focusing on how AI could improve terrorists' attack-planning capabilities rather than merely enhance what they are already doing.
"Specifically, the authors investigated the potential implications of commands that can be input into these systems that effectively ‘jailbreak’ the model, allowing it to remove many of its standards and policies that prevent the base model from providing extremist, illegal, or unethical content," the authors explained in their abstract.
"Using multiple accounts, the authors explored the different ways that extremists could potentially utilize five different large language models to support their efforts in training, conducting operational planning, and developing propaganda."
Their testing revealed Bard as the most resilient to jailbreaking, that is, to bypassing its built-in guardrails, followed by each of the ChatGPT models. Notably, the authors found that indirect prompts were enough to jailbreak a model in more than half of cases.
The study concluded that guardrails against jailbreaking need constant review and "increased cooperation between private and public sectors," including academia, tech firms and the security community.