Social media has fanned the anti-immigration unrest spreading through British towns and cities. Elon Musk, far from being a passive observer, has amplified the discord. The Tesla CEO and owner of X posted on the platform on Sunday that “civil war is inevitable,” in reply to a post blaming the violent riots on “mass migration and open borders.”
On Monday, a spokesperson for the UK prime minister pushed back on Musk’s comment, saying “there is no justification for that.” Musk’s decision to amplify anti-immigrant sentiment underscores the danger of misinformation spreading online and inciting real-world violence, a pressing problem for the UK government, which pledged on Tuesday to hold accountable both the instigators of the riots and their online supporters.
Later that day, a 28-year-old man from Leeds was charged with using “threatening words or behavior intending to stir up racial hatred” online, according to the UK Crown Prosecution Service. The charges relate to “alleged Facebook posts,” said Nick Price, the CPS director of legal services.
In recent days, rioters have destroyed public property, set vehicles alight, and hurled bricks at police. They have also set fire to two Holiday Inn hotels in northern and central England believed to be housing asylum seekers awaiting decisions on their claims. Hundreds of people have been arrested.
The unrest erupted last week after far-right groups claimed on social media that the person charged over a brutal stabbing attack that killed three children was a Muslim asylum seeker. The false claims fueled widespread outrage against immigrants.
The alleged perpetrator, identified as 17-year-old Axel Rudakubana, was in fact born in the UK, police confirmed. Nonetheless, false claims about the attack, Britain’s deadliest mass stabbing of children in decades, spread rapidly online and persisted even after police corrected the record.
The Institute for Strategic Dialogue found that by the afternoon of July 30, the day after the attack, a false name attributed to the suspect had been mentioned more than 30,000 times on X by more than 18,000 distinct accounts. “The false name was spread organically but also promoted by platform algorithms,” the ISD noted.
“Platforms thus amplified the spread of misinformation to users who might not otherwise have encountered it, even after police confirmed it was inaccurate,” the ISD said.
The UK government suspects that bots, potentially state-backed, may have helped amplify the falsehoods.
Addressing ‘online criminality’
Social media companies have long struggled to enforce their own policies against hate speech and incitement to violence. “The issue has always been enforcement,” Isabelle Frances-Wright, a technology expert at the ISD, told CNN. “During crises and conflicts, when there is a surge of content, their already fragile moderation systems seem to collapse.”
Compounding the problem, Musk himself has endorsed incendiary content on X, a platform European regulators recently accused of misleading users. If the platform’s owner behaves this way, why wouldn’t others?
For instance, shortly after Hamas’s October 7 attack on Israel and the start of the ensuing war in Gaza, Musk, a self-proclaimed “free speech absolutist,” publicly endorsed an antisemitic conspiracy theory popular among White supremacists. He later apologized for what he described as his “dumbest” social media post ever.
Under Musk’s ownership, X has also loosened its content moderation policies and reinstated a number of previously banned accounts, including far-right figures such as Tommy Robinson, who has been actively stoking the UK protests while condemning violent acts.
This week, the UK government has vowed to prosecute “online criminality” and urged social media companies to combat the spread of misinformation. “Social media has supercharged… not just misinformation but also the incitement to violence,” stated UK Home Secretary Yvette Cooper.
“That is utterly disgraceful, and we cannot carry on like this,” she said on BBC Radio 5 Live, adding that law enforcement will pursue both “online criminality” and “offline criminality.”
At a cabinet meeting on Tuesday, UK Prime Minister Keir Starmer said that those involved in the riots, whether physically or online, “will face the full extent of the law and receive prompt justice.”
Peter Kyle, the science and technology secretary, stressed that social media companies must act decisively to halt the spread of hateful disinformation and incitement.
X, Meta (Facebook’s parent company), and TikTok did not respond to CNN’s requests for comment.
It remains unclear whether the UK government has the tools to hold social media platforms accountable for their role in the unrest. The UK’s Online Safety Act, passed last year, imposes new duties on social media platforms, including the removal of illegal content.
The Act also makes it a criminal offense to post false information online “intended to cause significant harm.” However, the legislation is not yet in force, as the regulator Ofcom is still finalizing its codes of practice and guidance.
Ofcom said on Monday that tackling illegal online content is a “major priority” and that it expects the first set of duties under the new Act covering illegal content to take effect “by the end of this year.” Once the law is in force, Ofcom will have the power to fine companies up to 10% of their global revenue.