Policing alt-tech for disinformation? VIGILANTly moving in the right direction

Policing social media is challenging (William et al., 2021), not only because of its scale (De Streel et al., 2020) but also because the decentralised nature of social media platforms makes it difficult for law enforcement authorities to monitor illegal behaviour (Mbungu Kala, 2024). Researchers have argued both for a police presence on social media (Kershaw, 2023) and for attention to the challenges it might bring (ibid.; Abel, 2022).

Written by Zaur Gouliev and Dr Sarah Anne Dunne from the Centre for Digital Policy at University College Dublin.

The article addresses the rapid spread of disinformation on social media and its potential to incite violence, advocating for proactive policing of online content. Features like algorithmic amplification and anonymity enable disinformation to reach millions and foster harmful behaviours. Disinformation campaigns, often coordinated by trolls or bots, complicate efforts to distinguish between authentic and inauthentic content. Social media companies use machine learning to detect these campaigns, but classification issues persist.
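To make that classification problem concrete, the sketch below shows a deliberately simplified, hypothetical text classifier that scores posts as potentially inauthentic. The training examples, labels and model choice are invented purely for illustration; it is not any platform's actual system, and real detection pipelines combine text with behavioural and network signals.

# A minimal, hypothetical sketch: TF-IDF features plus logistic regression
# over post text. The toy posts and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: 1 = suspected coordinated/inauthentic, 0 = ordinary content.
posts = [
    "BREAKING: share this before they delete it!!!",
    "They are hiding the truth, forward this to every group you are in",
    "Everyone is saying it, spread the word before the media covers it up",
    "Lovely walk along the canal this morning",
    "Does anyone know if the library is open on Sundays?",
    "Great match last night, what a goal in the final minute",
]
labels = [1, 1, 1, 0, 0, 0]

# Word and bigram TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Score a new post. The probability only reflects similarity to the toy
# "inauthentic" examples, not ground truth about who posted it or why,
# which is exactly where the classification issues the article mentions arise.
new_post = "Forward this now, they do not want you to see it"
prob = model.predict_proba([new_post])[0][1]
print(f"Estimated probability of inauthentic content: {prob:.2f}")

In practice, distinguishing coordinated campaigns from organic virality also depends on account-level and network-level signals such as posting cadence and near-duplicate content across accounts, which is why text-only classifiers of this kind produce the false positives and false negatives the article alludes to.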

Policing social media is crucial due to its influence on radicalisation, extremism, and violence, particularly on alternative platforms like Gab, Parler, and Telegram, where disinformation thrives unchecked. These "alt-tech" platforms have been linked to real-world violence, including riots in Dublin (2023) and London (2024), which were exacerbated by disinformation spread on Telegram. Attempts to regulate social media face challenges, such as jurisdictional issues, varying free speech laws, and the lobbying power of tech companies. The Irish media regulator, Coimisiún na Meán, is tackling this through the Online Safety Code, which holds companies accountable for protecting users from harm.

A key point is the reliance on self-regulation, in which users report harmful content; this approach has limitations, particularly because moderation is often outsourced to poorly trained workers. Additionally, social media platforms profit from engagement with disinformation, creating a conflict between user protection and freedom of expression.

The VIGILANT project is presented as a proactive solution, equipping law enforcement with tools to detect disinformation early and prevent violence. To illustrate its application, the article presents two case studies: the Southport Riots and the Dublin Riots. Both demonstrate how VIGILANT could have played a critical role in minimising the escalation of violence.

Read the article on the UCD website!