Disinformation can cause significant harm to society, reduce trust in governments, disrupt the rule of law and promote civil unrest. It directly contributed to the rioting in Dublin in November 2023, has been used to target elections in Germany, France, Spain and Italy, and drives the spread of false medical information and treatments relating to COVID-19, cancer, autism, and the MMR and HPV vaccines. This article gives an overview of two ongoing three-year Horizon Europe sister projects, VIGILANT and FERMI, which both develop technical platforms to support law-enforcement agencies (LEAs) in the fight against disinformation campaigns, including those that target free and fair elections.
VIGILANT and FERMI can help combat disinformation in the context of electoral campaigns
Europe is facing an increasing number of disinformation campaigns by foreign states such as Russia and China. These are aimed at exacerbating divisions in society, promoting extremist groups that are already eager to push their narratives, and attacking political figures who are critical of them while promoting friendly politicians and commentators. The European Parliament elections and the dozen other national elections in Europe in 2024 are another opportunity for these countries and others to sow disunity and tap into the anger and frustration of numerous political and social movements across Europe. For example, far-right extremists are known to have benefitted from foreign influence operations and to have relied heavily on disinformation activities of their own to get their message across. Such disinformation campaigns might also question the outcome of an election, which may lead to violent unrest, as it did in the United States and Brazil, where government buildings were stormed amid tensions over election results.
The VIGILANT and FERMI platforms will enable LEAs to detect, investigate, monitor and respond to such disinformation campaigns, and to carry out in-depth impact assessments of digital content in multiple formats and languages and from multiple sources, so that appropriate counter-measures can be taken. Accordingly, each platform has a range of unique capabilities, some of which include:
Detection of disinformation
- detection of likely disinformation in ingested content, using indicators such as manipulated images and videos, narrative analysis, false-claim detection and comparison of content against known examples of disinformation, fake images and false claims (see the illustrative sketch below)
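Neither project has published the internals of its detectors, so the following is only a minimal sketch of one ingredient mentioned above: comparing an ingested image against a reference set of known fake or manipulated images. It uses a simple 8x8 average perceptual hash; the file paths, the reference set and the distance threshold are all hypothetical, and a production system would rely on far more robust techniques.

```python
# Illustrative sketch only: flag an ingested image that closely matches a
# known fake, using a simple average perceptual hash and Hamming distance.
from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical reference set of known manipulated images.
known_fakes = {"fake_poster.jpg": average_hash("fake_poster.jpg")}


def looks_like_known_fake(path: str, max_distance: int = 10) -> bool:
    """True if the image is within `max_distance` bits of any known fake."""
    h = average_hash(path)
    return any(hamming(h, ref) <= max_distance for ref in known_fakes.values())


print(looks_like_known_fake("ingested_image.jpg"))
```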
Investigation of disinformation activities
- investigation of coordinated link and content sharing, which may be indicative of a campaign such as an organised attempt to sway public opinion and/or influence an election; a graph analysis reveals which accounts have communicated with each other, indicating their role and influence in a disinformation campaign (a sketch of such a graph analysis follows this list)
- revealing whether accounts used to share disinformation are run by humans or bots (in the form of a likelihood assessment), so LEA investigations can be directed at the people behind the accounts
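As a purely illustrative example of the graph analysis referred to above (not the projects' actual implementation), the sketch below builds a co-sharing network with networkx: accounts that share the same links are connected, and a simple centrality score hints at each account's role in a possible campaign. The account names and posts are invented.

```python
# Illustrative sketch: build a co-sharing graph from (account, url) posts and
# rank accounts by degree centrality as a rough proxy for their role.
from collections import defaultdict
from itertools import combinations
import networkx as nx  # pip install networkx

# Hypothetical observations: which account shared which URL.
posts = [
    ("acct_a", "http://example.org/claim1"),
    ("acct_b", "http://example.org/claim1"),
    ("acct_c", "http://example.org/claim1"),
    ("acct_b", "http://example.org/claim2"),
    ("acct_c", "http://example.org/claim2"),
]

shared_by = defaultdict(set)
for account, url in posts:
    shared_by[url].add(account)

G = nx.Graph()
for url, accounts in shared_by.items():
    for a, b in combinations(sorted(accounts), 2):
        # Edge weight counts how many URLs a pair of accounts has in common.
        previous = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=previous + 1)

centrality = nx.degree_centrality(G)
for account, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.2f}")
```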
Monitoring of disinformation activities
- gauging the influence of each social media account (via an influence score), so LEA monitoring can be directed at those behind particularly menacing activities (an illustrative influence-score sketch follows this list)
- detection of narratives, claim sentiment (positive, negative or neutral) and stance in multi-party conversations, enabling LEAs to quickly analyse large volumes of content and identify potential targets and locations of attack
- identification of manipulated images and videos and comparison against datasets of known fake and manipulated images and videos, enabling police units to see which narratives and claims are being reused.
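The platforms' influence scores are not publicly specified; the sketch below is only a toy illustration of the idea of an influence score for prioritising monitoring. The weighting, the fields and the 0-100 scale are assumptions, not the FERMI or VIGILANT formula.

```python
# Illustrative sketch: a toy influence score combining audience size and
# engagement, log-dampened and mapped to a 0-100 scale. Purely hypothetical.
from dataclasses import dataclass
import math


@dataclass
class AccountStats:
    followers: int
    avg_reshares: float  # mean reshares per post
    avg_replies: float   # mean replies per post


def influence_score(s: AccountStats) -> float:
    """Return a 0-100 score; weights and scaling are invented for illustration."""
    reach = math.log10(s.followers + 1)
    engagement = math.log10(s.avg_reshares + s.avg_replies + 1)
    raw = 0.6 * reach + 0.4 * engagement
    return round(min(raw / 7.0, 1.0) * 100, 1)  # 7 ~ log10 of a very large account


accounts = {
    "acct_a": AccountStats(followers=120_000, avg_reshares=350, avg_replies=80),
    "acct_b": AccountStats(followers=900, avg_reshares=4, avg_replies=2),
}
for name, stats in accounts.items():
    print(name, influence_score(stats))
```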
Identifying and responding to threats of crime
- identification of key words, phrases or dog-whistles used in disinformation campaigns that may promote false narratives, incite violent attacks on migrants, refugees or members of a marginalised community, or be used to sway public opinion for or against a political figure or party in an election
- estimation of the near-term crime landscape (the likely number of criminal incidents in disinformation-sensitive fields such as vandalism, disorderly conduct and assault, broken down by NUTS-2 region), so LEAs can thwart or contain illegal activities in good time, for example by dispatching police units trained and prepared to rein in riots should extremists incite violent demonstrations before or after an election (see the forecasting sketch below)
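FERMI's actual forecasting model is not described here; the sketch below simply illustrates what a per-region, near-term estimate could look like, using naive exponential smoothing over monthly incident counts. The NUTS-2 codes, the incident counts and the smoothing factor are invented.

```python
# Illustrative sketch: naive near-term forecast of disinformation-sensitive
# incidents per NUTS-2 region using simple exponential smoothing.


def exponential_smoothing(counts: list[int], alpha: float = 0.5) -> float:
    """Return the smoothed level after the last observation (used as the forecast)."""
    level = float(counts[0])
    for c in counts[1:]:
        level = alpha * c + (1 - alpha) * level
    return level


# Hypothetical monthly incident counts (e.g. vandalism) per NUTS-2 region.
history = {
    "IE06": [12, 15, 14, 21, 30],  # Eastern and Midland (Dublin)
    "DE30": [8, 7, 9, 10, 9],      # Berlin
}

for region, counts in history.items():
    forecast = exponential_smoothing(counts)
    print(f"{region}: expected ~{forecast:.0f} incidents next month")
```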
Impact assessment and counter measures
- helping LEAs to understand the impact of disinformation campaigns and recommending potential interventions to mitigate their effects
- estimation of the costs of criminal activities (in the form of an impact score from 1 to 5 capturing all costs resulting from disinformation-induced crime) and, where necessary, proposal of specific anti-disinformation measures: the three counter-measures most suited to the situation at hand are shared with end-users if the impact score is at least 3 (counter-measures include the removal of online content, education campaigns, etc.; see the sketch below)
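The decision logic described above (impact score of 1 to 5, counter-measures proposed only from a score of 3 upwards) can be sketched as follows. The candidate counter-measures and their suitability ranking are placeholders; how FERMI actually selects the three most suitable measures is not shown here.

```python
# Illustrative sketch of the threshold logic described above: if the impact
# score (1-5) reaches 3, surface the three most suitable counter-measures.

# Hypothetical candidate measures with invented suitability weights.
COUNTER_MEASURES = [
    ("removal of online content", 0.9),
    ("education / prebunking campaign", 0.8),
    ("public fact-checking statement", 0.7),
    ("increased monitoring only", 0.4),
]


def recommend(impact_score: int, top_n: int = 3) -> list[str]:
    """Return the top counter-measures, or an empty list below the threshold."""
    if not 1 <= impact_score <= 5:
        raise ValueError("impact score must be between 1 and 5")
    if impact_score < 3:
        return []  # below the threshold, no counter-measures are proposed
    ranked = sorted(COUNTER_MEASURES, key=lambda m: -m[1])
    return [name for name, _ in ranked[:top_n]]


print(recommend(4))  # three highest-ranked measures
print(recommend(2))  # []
```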
Initial results from FERMI, VIGILANT
The VIGILANT project will equip European LEAs with advanced technologies from academia to detect, analyse, investigate and combat disinformation linked to criminal activities. The FERMI project has developed an integrated platform with tools that facilitate investigations, conduct threat assessments, assess the likely impact of disinformation campaigns and propose counter-measures where necessary. VIGILANT’s platform of modular tools will enable LEAs to be more effective and efficient in their work while building institutional knowledge of the social drivers and behavioural dynamics behind the phenomena. VIGILANT can also be used to investigate and monitor hate speech, radicalisation, incel, extremist, violent separatist, nationalist and terrorist-related content. The VIGILANT project also provides LEAs with advanced training on using the VIGILANT platform and on the underlying social drivers and behavioural dynamics behind disinformation, and sets up a long-term peer-to-peer support network for LEA units.
On the investigation side, the FERMI platform can distinguish human-operated accounts (whose owners may be investigated) from bot-operated ones, collect evidence by mapping the spread of disinformation on social media, and analyse the influence of the messages at stake. Threat assessments are conducted by analysing the atmosphere surrounding a disinformation campaign on social media and estimating the likely crime landscape, enabling LEAs to take appropriate precautionary measures. Specific recommendations on how to counter a disinformation campaign are provided to the user in the event that the probable costs resulting from disinformation-induced crime are assessed as high or rather high.
To achieve this, both projects have engaged extensively with LEAs to assess their needs; with disinformation, legal and ethics experts to ensure that the projects’ outputs meet the needs of LEAs while being ethically and legally compliant; and with policy makers to keep them up to date on efforts to combat disinformation and to provide them with expert insights and policy recommendations.
Planned work ahead for VIGILANT, FERMI
Both projects are now entering the second half of their three-year lifecycles and are moving to the evaluation, deployment and training phases. The VIGILANT consortium has completed the development of a Minimum Viable Prototype (MVP) of the VIGILANT platform, which is being evaluated by partner LEAs. Future work will focus on integrating the remaining detection, analysis and monitoring tools, increasing the number of languages the platform supports, and adding tools that better enable LEAs to understand the impact of a disinformation campaign and the potential response interventions to mitigate its negative effects. The VIGILANT project is also actively recruiting LEAs that are not part of the consortium but are interested in adopting the platform to join its Community of Early Adopters. Providing LEA training and establishing a sustainable long-term support network will become the focus towards the end of the project.
An in-depth experimentation protocol that will guide the validation of the FERMI platform has been drafted and refined. Amongst other things, the protocol includes three use cases (on violent right-wing, left-wing and COVID-related extremism). The validation efforts have been divided into three pilots, each covering one use case, carried out by different LEA and LEA-affiliated partners. The platform will then be fine-tuned on the basis of end-user feedback. As with VIGILANT, the FERMI consortium will train LEAs in using the platform and will double down on communication, dissemination and, in particular, exploitation activities.
Results from the synergies between FERMI, VIGILANT
As sister projects, VIGILANT and FERMI work in close partnership with each other. This includes sharing each other's project updates on their social media channels and among their respective networks, organising knowledge-sharing workshops, consulting with each other on data access, and disseminating project materials to LEAs. In the future, the projects aim to collaborate on LEA training to combat disinformation, organise joint dissemination and communication events, author joint research and policy papers, and organise joint technical demonstrations of their tools for each other and the wider expert community.
The projects are also keen to work with policy makers to ensure a long-lasting positive effect from the projects and the knowledge built up over their lifespan. They have recently joined a cluster of Horizon Europe projects focused on hybrid threats so that they can share their knowledge and experience and provide an expert forum for policy makers to consult.
Read more: REA Democracy webpage
Further information about each project and the capabilities of each platform can be found on their respective websites or through their social media channels. Both projects welcome enquiries from LEAs, researchers, experts and policy makers.
Links to the projects’ websites
VIGILANT
- Website: https://www.vigilantproject.eu/
- X | Twitter: https://twitter.com/EUvigilant
- Facebook: https://www.facebook.com/profile.php?id=100087492975842
- LinkedIn: https://www.linkedin.com/company/the-vigilant-project/
FERMI
- Website: https://fighting-fake-news.eu/
- X | Twitter: https://twitter.com/fermi_project
- YouTube: https://www.youtube.com/@fermi-project
- LinkedIn: https://www.linkedin.com/company/fermi-project/
- Mastodon: https://mastodon.social/@fakenewsriskmitigator