How a few Twitter posts may have inflamed the violence hitting the UK

Even before much at all was known about the killings in Southport, online speculation had begun. Much of it was false: posts about the attacker’s supposed name and Muslim identity were trending long before the actual facts could be known.

But those posts had already done their work. A vast far-right response had been mobilised, and almost a week later towns across the country have been overtaken by violent riots and disorder.

Experts agree that it is impossible to say how much of that unrest is the result of those false posts. But it is clear that they helped fuel a narrative that was untrue but nonetheless useful to far-right communities that were already organised.

Anti-migration protesters are seen during riots in Manvers, near Rotherham (PA)

“The riots are a response to the accusation that the killer in the Southport stabbings was a Muslim, and that is categorically false. So it’d be hard to say that misinformation didn’t have a role in the current moment,” says Marc Owen Jones, an associate professor of digital humanities at Northwestern University in Qatar. “Secondly, much disinformation – though not all – is spread on social media, and that is increasingly how people communicate, and that’s how much of the particular disinformation in these cases was spread, so I think again it’s very hard to say that social media doesn’t play a key role in this.

“But it’s not the only thing; it’s not only the immediate moments before and after the killings that are the issue. There has been a diet of disinformation and xenophobic, anti-migration misinformation going on for months, targeting the UK, so I think we have to consider how all of this plays a role in what happened.”

It is not necessarily possible to say how much of the blame lies with the misinformation rather than with more longstanding motives, because misinformation works precisely by spreading among people who are ready to receive it. Sander van der Linden, a professor of social psychology and disinformation expert at the University of Cambridge, likens it to a virus – and misinformation experts often use models from epidemiology to track the spread of false information – in the way that it needs to find a “susceptible host”. Just as some people are more at risk of falling sick, some are more at risk of falling for lies.
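For readers curious what such an epidemiological model looks like in practice, below is a minimal, illustrative sketch of the susceptible-infected-recovered (SIR) approach that researchers adapt for rumour spread. The parameter values and the code itself are assumptions for illustration only; they are not drawn from any study by Professor van der Linden or anyone cited here.

```python
# Illustrative only: a minimal SIR-style model of rumour spread, of the
# kind misinformation researchers adapt from epidemiology. Every
# parameter value below is a made-up assumption for this sketch.

def simulate_rumour(population=100_000, seed_sharers=10,
                    contact_rate=0.35, recovery_rate=0.10, days=60):
    """Track how many people could share, are sharing, or have stopped."""
    s = population - seed_sharers   # susceptible: could still believe/share
    i = seed_sharers                # "infected": actively sharing the rumour
    r = 0                           # "recovered": stopped sharing / corrected
    history = []
    for day in range(days):
        new_sharers = contact_rate * s * i / population
        newly_stopped = recovery_rate * i
        s -= new_sharers
        i += new_sharers - newly_stopped
        r += newly_stopped
        history.append((day, round(s), round(i), round(r)))
    return history

# Print a snapshot every ten simulated days.
for day, s, i, r in simulate_rumour()[::10]:
    print(f"day {day:2d}: susceptible={s:6d} sharing={i:6d} stopped={r:6d}")
```

The point of the analogy is visible in the model’s structure: the rumour only grows while it keeps finding “susceptible hosts”, and a population already primed to believe it behaves like one with a higher contact rate.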

“I think that’s where it intersects with the far-right communities that have been spreading anti-immigrant, anti-Muslim misinformation for a long time,” he says. “They have a playbook, they have a long history of doing this, so they can easily leverage their networks and their rhetoric around it.”

Much of the false information about the attack appeared to originate with a website called Channel 3 Now, which generates video reports styled to look like those of mainstream news channels. But its video, and its false claims about the name of the attacker, might have stayed relatively obscure had they not been highlighted by larger accounts.

On X, users with considerable followings quickly shared that video and spread it across the site. On other platforms such as TikTok – where the app’s algorithm can make videos go viral quickly even when the accounts posting them have few followers – they racked up hundreds of thousands of views. At one point, the false name of the attacker was a trending search on both TikTok and X, meaning it was shown to users who might otherwise have had no interest in it at all.

A large part of the inflammatory misinformation flared up on X, the social media site now owned by Elon Musk. Since he bought what was then called Twitter at the end of 2022, both the site and his personal account have been repeatedly criticised for allowing false and dangerous content to flourish.

Social networks have long built their systems to encourage engagement, which brings in money, and inflammatory and antagonistic posts have long been a quick way to generate that engagement. But experts agree that the misinformation would have been very unlikely to spread in the same way on the old Twitter, before Mr Musk took it over.

Soon after taking over, for instance, he fired many of the staff on the site’s safety team, which had helped work against misinformation, and both the rules on misinformation and their enforcement were weakened. (X’s rules do still officially ban “inciting behaviour that targets individuals or groups of people belonging to protected categories”.) He also changed the Twitter API in a way that made it prohibitively expensive for most researchers to gather tweets, making it nearly impossible to track that misinformation and its spread. Experts also say that the relative lack of enforcement on X has made it easier for other social networks to scale back their own enforcement against misinformation and hate speech; there are few real legal requirements on those companies, and much of the pressure to act is social.

There are some relatively easy measures that all companies could take to limit the spread of this information, researchers note – and they point out that X was previously doing at least some of them. It could more fully enforce its rules against hate speech, at least temporarily limiting the accounts that post it, or it could rethink the verification tools that let anyone pay to have their content boosted. It may eventually be forced to do some of those things: the European Union has increased its focus on the site’s policies, which could lead to more scrutiny from other governments.

But X is just one site in a vast and active media ecosystem that helps propel information quickly, with little concern for whether it is real or not. Posts that begin on X will make their way to other social media platforms as well as chat apps such as WhatsApp and Telegram – which are private and therefore difficult to track – and information will also flow the other way. But X is notable because it allows people to build a large platform quickly, and spread information to mainstream audiences and people who might not otherwise follow such personalities.

X did not respond to a request from The Independent for comment on any of these issues. The company fired much of its press team in the wake of Elon Musk’s takeover.

Elon Musk’s personal account has often been used to draw attention to other controversial accounts that have been linked with the unrest. Mr Musk often replies to posts he finds interesting with exclamation marks or the eyes emoji – and did so last week, in response to posts from Tommy Robinson.

Those replies may be short, but they can help push the posts into the feeds of people who have not chosen to follow the original accounts. Professor Jones has shown, for instance, that similar replies from Mr Musk have helped rapidly increase the engagement on posts that might otherwise have stayed relatively obscure.

For the most part, Mr Musk has simply engaged with those controversial posts rather than sharing them himself – but that engagement still means many people on the site will see them, especially given that he is by far its most followed user. Occasionally, though, he has appeared to boost misinformation of his own.

Over the weekend, Mr Musk posted in response to a video of violent riots in Liverpool. “Civil war is inevitable,” he wrote, one of a number of inflammatory posts he has written in response to the UK’s ongoing unrest. It drew condemnation from the UK government and others.

Mr Musk’s posts on these and similar topics have increased since his takeover of the site. Before he bought it, he had largely refrained from endorsing political views. But in the time since, he has explicitly backed Donald Trump and has posted more generally about right-wing talking points.

He has said that his focus on what he called the “woke mind virus” arose in part because of his trans child, whose transition he objected to. But Professor van der Linden also notes that Mr Musk may be “radicalising himself on his own platform”.

“If you look at it over time – he was somebody who was talking about solving climate change and working on big things, and then all of a sudden has drifted into this echo chamber of conspiracy theories and science denial, extremism and racism. And how do you find yourself in such an environment? What’s changed between then and now is that he’s spending a lot of time on X.

“It’s not a causal experiment, and some people disagree with the echo chamber hypothesis. But I think if there’s one example where it’s pretty clear that somebody’s spending a little too much time on X, maybe it’s Elon Musk.”
