
Analysis: UK riots show how social media can fuel real-life harm. It’s only getting worse | CNN Business




CNN
 — 

The widespread anti-immigrant riots that have swept the United Kingdom over the past week, and the false viral claims that fueled them, may be the clearest, most direct example yet of how unchecked misinformation on social media can produce violence and harm in the real world.

Even after authorities identified a UK national as the suspect behind a series of deadly stabbings targeting children, false claims about the attacker’s name and origins continued to stoke anti-immigrant fervor and propel far-right demonstrations.

The fake claims have circulated widely, particularly on X, the platform formerly known as Twitter, extremism researchers said. And police have openly blamed that misinformation for the violence that has wracked the country in recent days, with rioters throwing bricks at mosques, setting cars on fire and chanting anti-Islamic slogans while clashing with officers in riot gear.

The events of the past week are hardly the only example of the link between online misinformation and politically motivated violence: From the Rohingya genocide to the attack on the US Capitol on January 6, 2021, false and misleading claims have consistently been at the center of high-profile incidents of political unrest and violence.

It is a pattern that keeps repeating despite years of calls by governments and civil society groups for social media platforms to rein in inflammatory, hateful posts, as well as pledges by companies themselves to do more.

A recent retreat from content moderation by some major platforms, however, suggests that the problem of violence fueled by misinformation may well get worse before it gets better.

For nearly a decade, governments and civil rights groups have increasingly argued that online platforms have created enormous societal costs.

Critics of social media have repeatedly accused the industry of putting corporate profits before users’ mental health, or opening the door to foreign meddling, without doing enough to shield the world from those risks.

An economist might call these negative externalities – like pollution, they are byproducts of a profit-seeking business that, left unaddressed, everyone else must either learn to live with or mitigate, usually at great collective expense. The consequences tend to play out over long timeframes and with large-scale, systemic effects.

This week, it is hard to avoid wondering whether politically motivated violence based on nothing more than bad-faith, evidence-free speculation has become a permanent fixture among social media’s various externalities, and whether we are being asked to make peace with it as a condition of living in a digitally connected world.

Many social media companies have invested heavily in content moderation over the years. But the industry’s recent track record hints at a bet – or perhaps a hope – that just maybe, the public will tolerate a bit more pollution.

There are some signs of pushback. In the European Union, officials are looking to hold social media companies accountable for spreading misinformation under the new Digital Services Act. In the UK, the Online Safety Act could take effect as soon as this year, requiring social media platforms, among other things, to remove illegal content.

And even tougher rules may be on the way as a result of the riots. “We’re going to have to look more broadly at social media after this disorder,” UK Prime Minister Keir Starmer said in a video distributed to media Friday.

But punishments for online wrongdoing are already being handed out to individual perpetrators. On Friday, Jordan Parlour from Leeds, England, was sentenced to 20 months in jail after being convicted of publishing written material intended to stir racial hatred. The 28-year-old had posted the material on Facebook.

The United States has lagged on platform regulation, partly due to congressional dysfunction and partly because of legal and constitutional differences that grant online platforms more freedom to manage their own websites.

Still, lawmakers made some moves last month when the US Senate passed the Kids Online Safety Act, which aims to combat mental health harms for teens linked to social media.

It may be tempting to dismiss social media’s role in the UK riots as merely a reflection of latent political trends or the result of activism that would have happened on other platforms anyway.

But that distracts from the calculation that some platforms appear to have made: At least some of the time, some amount of misinformation-fueled violence is a reasonable cost for society to pay.

Olesya Dmitracova and Kara Fox contributed reporting.
