Social Media is running riot

The loss of a child is every parent's worst nightmare. The senseless murder of three innocent children by a 17-year-old boy in the Southport knife attack must leave an unimaginable gulf in the lives of their parents, families, and friends. Yet that wasn't everyone's reaction to the news of their murder; some callously saw it as an opportunity. Almost immediately after the attack, a user of X (formerly Twitter) posted a screenshot of a LinkedIn post falsely purporting to be from one of the parents, claiming the attacker was an immigrant. This X user asserted that the attacker was called Ali al-Shakati, had arrived in the UK illegally by boat the previous year, and was on an MI6 watch list. According to the police, none of this was true.


Yet this disinformation spread rapidly on X, with posts from small accounts being picked up by larger accounts with more followers. These were then amplified by news channels such as Channel3Now and Russia Today, one of whose posts was reposted by Elon Musk. X was far from alone: TikTok and a host of other platforms were also used to spread the disinformation. In part, this rapid viral spread was driven by X's and TikTok's recommendation engines, which surfaced the disinformation in their "Topics trending in the UK" and "What other users searched for" features. Messaging platforms such as Telegram, Signal and WhatsApp were then used to orchestrate the flow of anger from the digital world to the physical streets, leading to the worst rioting in the UK for a decade.


In the aftermath of the rioting, the media and politicians focused on the role of social media platforms in sparking the violence. The UK's 'Online Safety Act' makes platform owners responsible for illegal content, such as incitement to violence or racial hatred. More problematically, it tries to address content that may be harmful but is not illegal. This is not easily codified: the same content can be disinformation, fiction or a joke, depending on its context. Further, when should a sincerely held view, no matter how irrational, become unutterable? To navigate this line, the Act requires platform providers to offer their users tools that can block certain types of content, such as content relating to misogyny, eating disorders or self-harm. This leaves the providers to decide whether content falls into these categories. How they will do so is opaque, with no clear rules or appeals process for those who feel their voice has been unreasonably silenced. The Act, in effect, outsources censorship of the Internet to private companies without proper public governance. Yet what alternatives exist?


In a deep sense, this dilemma between letting harmful disinformation flow and censoring it is a result of the architecture of today's Internet: an architecture in which the platforms control our data and use it to target us with content. As such, they are the only point of control that can restrict what content we are shown. When we use X, Facebook or TikTok, we enter an information bubble of their design, even if they give us tools to control the bubble's size. Yet there are alternatives. It is perfectly feasible to hand control over filtering content back to individuals, in the form of data and tools that live on their own devices. These AI-powered filtering tools can be connected to fact-checking services of the individual's choice, the ones they trust, empowering them to filter their own feeds. Public education campaigns could help people understand the consequences of setting their tools to different risk levels. This gives individuals the means to minimise their exposure to disinformation. After all, people don't want to be fed lies and misled, and it is short-sighted not to trust and empower citizens to be an active part of the solution. This approach prevents power from becoming concentrated in corporations or the state, and it preserves the individual's agency.
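For readers who want to see what this could look like in practice, below is a minimal sketch of on-device filtering written in Python. It is an illustration under stated assumptions, not a description of any existing product: the fact-checking service, the credibility score and the user-set risk threshold are all hypothetical placeholders.

# A minimal sketch of on-device content filtering. It assumes a hypothetical
# fact-checking service chosen by the user and a risk threshold stored on the
# user's own device. All names (Post, FactChecker, filter_feed) are illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

class FactChecker:
    """Stand-in for whichever fact-checking service the user chooses to trust."""
    def credibility(self, text: str) -> float:
        # A real tool would query the chosen service or run a local model;
        # here we simply return a neutral placeholder score.
        return 0.5

def filter_feed(posts: list[Post], checker: FactChecker, risk_threshold: float) -> list[Post]:
    """Keep only posts whose credibility meets the threshold the user has set locally."""
    return [p for p in posts if checker.credibility(p.text) >= risk_threshold]

# Example: a stricter threshold hides more unverified content; a looser one shows more.
feed = [Post("local_news", "Verified report on the incident."),
        Post("anon123", "The attacker was on an MI6 watch list.")]
visible = filter_feed(feed, FactChecker(), risk_threshold=0.4)
for post in visible:
    print(post.author, "-", post.text)

The point of the design is that both the threshold and the choice of fact-checker live on the individual's device, so neither a platform nor the state decides what is hidden.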


By Matt Stroud, author of ‘Digital Liberty’ – published by Buckingham University Press
