While AI has become a tool for spreading malign content, including terrorist propaganda, it may also hold the key to countering it. The latest report from Tech Against Terrorism Europe delves into this dual nature of AI, exploring its role in both the proliferation and prevention of online terrorism.
The internet has become fertile ground for terrorist groups, enabling them to disseminate extremist content, radicalize individuals, and recruit members. The sheer scale of the issue is highlighted by Tech Against Terrorism's findings, which identified terrorist content on 187 different online platforms between November 2020 and January 2023. Facebook's removal of over 56 million pieces of terrorist propaganda in 2022 and YouTube's deletion of 275,261 violence-promoting videos in the same period underscore the magnitude of the problem.
While most of this content was flagged and removed by automated tools, technology alone cannot cure social media's ailments. Automated content moderation, whether based on matching known material or classifying new material, has its limits and must be augmented with human oversight and intervention. That oversight, however, demands resources that platforms of varying sizes struggle to muster, making it a significant challenge to monitor and manage online content effectively.
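To make the distinction between the two automated approaches concrete, here is a minimal sketch in Python. All names, thresholds, and the hash list are hypothetical, and the "classifier" is a toy keyword heuristic standing in for a trained model; production systems typically rely on perceptual hashing (such as PhotoDNA or PDQ) rather than exact cryptographic hashes.

```python
import hashlib

# Hypothetical list of hashes of known terrorist content.
# Real matching systems use perceptual hashes, which tolerate
# small edits to an image or video; SHA-256 is used here only
# to keep the example self-contained.
KNOWN_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def matches_known_content(data: bytes) -> bool:
    """Matching-based moderation: compare an upload against known items."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

def classify(text: str) -> float:
    """Classification-based moderation: stand-in for an ML model that
    returns a probability that the content violates policy."""
    flagged_terms = {"attack", "recruit"}  # toy heuristic, not a real model
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / len(flagged_terms))

def moderate(data: bytes, text: str,
             remove_threshold: float = 0.9,
             review_threshold: float = 0.5) -> str:
    """Combine both signals; uncertain cases escalate to human reviewers."""
    if matches_known_content(data):
        return "remove"            # known material: automated removal
    score = classify(text)
    if score >= remove_threshold:
        return "remove"            # high-confidence classifier decision
    if score >= review_threshold:
        return "human_review"      # the limit of automation: escalate
    return "allow"

if __name__ == "__main__":
    upload = b"example upload"
    print(moderate(upload, "They tried to recruit him online"))  # human_review
```

The escalation branch is the crux of the report's point: matching only catches material already in the hash list, and a classifier's mid-confidence scores are exactly where human judgment is still needed.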
So how can this situation be addressed, and how can AI best be put to use against terrorist content online? Read the full report by Tech Against Terrorism Europe here and follow the activities of the VIGILANT project.
The project is a joint effort by leading European researchers and four European police authorities from Spain, Greece, Estonia, and Moldova. Its main goal is to develop a sophisticated platform with tools for detecting and analyzing malign content, arming police authorities with both the technical capabilities and the institutional knowledge needed to identify and counter disinformation linked to criminal activities and other harmful illegal content.