AI's Dark Underbelly: Militant Groups Harness Artificial Intelligence, Raising Alarm

Published 2 days ago · 2 minute read
Uche Emeka

As the global community increasingly embraces artificial intelligence, militant groups are also exploring this technology, though their exact applications are still evolving. National security experts and spy agencies have issued warnings that AI could become a potent instrument for extremist organizations, facilitating member recruitment, generating realistic deepfake images, and enhancing cyberattacks.

In one recent instance, a user on a pro-Islamic State group website advocated integrating AI into operations, highlighting how easy the tools are to use and urging supporters to make real the fears intelligence agencies have voiced about AI-driven recruitment. Given IS's prior success in leveraging social media for recruitment and disinformation, national security experts say the group's current experimentation with AI is a logical progression.

For loosely organized, under-resourced extremist groups, or even individual bad actors with internet access, AI offers the capability to produce propaganda or deepfakes at an unprecedented scale, thereby expanding their reach and influence. John Laliberte, CEO of cybersecurity firm ClearVector and a former NSA vulnerability researcher, noted that AI significantly simplifies tasks for adversaries, enabling even small, financially constrained groups to make a considerable impact.

Militant groups initiated their use of AI shortly after programs like ChatGPT became widely accessible. In the subsequent years, they have increasingly employed generative AI to create convincing photos and videos. When combined with social media algorithms, this fabricated content can effectively attract new adherents, sow confusion or fear among adversaries, and disseminate propaganda on a scale unimaginable a few years ago.

Instances of this malicious use include the spread of fake images during the Israel-Hamas war two years ago, depicting bloodied infants in destroyed buildings. These images incited outrage and polarization, obscuring the genuine atrocities of the conflict and serving as recruitment tools for violent groups in the Middle East and antisemitic hate groups globally. A similar scenario unfolded last year following an attack claimed by an IS affiliate in Russia, where AI-generated propaganda videos swiftly circulated on discussion boards and social media to attract new recruits.

Furthermore, researchers at SITE Intelligence Group, a firm monitoring extremist activities, have documented IS's evolving use of AI, including the creation of deepfake audio recordings of its leaders reciting scripture and the rapid translation of messages into multiple languages.

Marcus Fowler, CEO at Darktrace Federal and a former CIA agent, categorizes the more sophisticated uses of AI by these groups as
