Politicians should “feel empowered” to leave platforms like X – formerly Twitter – if they believe it presents a greater risk than benefit to their work, according to a damning new report from the Speaker of the House of Commons, Sir Lindsay Hoyle.
The report includes the strongest nudge yet for politicians to consider leaving X or Facebook altogether.
The official House of Commons report from the Speaker’s Conference on abuse of politicians is scathing about X, where toxic content appears to have got far worse since far-right billionaire Elon Musk took over in 2022.
While a spokesperson for X told the inquiry: “It is very important that people who use the platform are not victims of abuse” and that they take the issue seriously, the Speaker-led probe found that X was failing in its legal duties to remove threats of violence against politicians.
And the Speaker’s Conference said X’s written evidence was “often more evasive than it was informative”. For example, it noted that “when asked for the percentage of requests for information from the police they had denied due to insufficient evidence to support the request, X declined to say.”
But the Commons body did extract important information from the social media platform, long seen as a “public square” for political debate in the UK and beyond.
X has “only 1,486 human reviewers working in English” despite receiving “more than 85,000 posts that breach their terms of service every day”, according to the platform’s own data.
It also found that X “shared information with the police in less than half of the instances when it was requested formally, and in only 30% of instances when it was requested informally” – compared to Meta’s 85% compliance rate.
Both Meta – which owns Facebook and Instagram – and X had a higher bar for removing abuse when it concerned politicians.
The report notes that Meta only removes “violent threats against public figures, such as MPs and candidates if they are found to be ‘credible’” – using a “high bar” including looking for “specific locations and timing of a potential attack”. Meanwhile, X’s enforcement considers whether content “may be a topic of legitimate public interest” – in other words, attacks on politicians are treated as more acceptable than attacks on ordinary members of the public.
The critical assessment from the Speaker’s Conference concluded: “We have no faith that Meta and X will resolve these issues unless they are legally obliged to do so. There is therefore no point in us recommending wholesale change of policies or modus operandi.”
Among the report’s key concerns were a general “unwillingness to remove abuse against MPs, candidates and other public figures”, and indications that X and Meta’s algorithmic recommendation systems actually give preference to incendiary and threatening content – because it boosts engagement on the platform.
Meta and X were also condemned for choosing to send “relatively junior witnesses” to the inquiry, suggesting a lack of respect for the process.
The report went on to question “their sincerity” – adding that the platforms do not appear to “understand the damaging impact they are having on democracy in the UK.”
The Commons inquiry noted that having higher bars for removing threats against politicians than other individuals “normalises the idea that MPs and candidates should be expected to endure a greater level of abuse than people in other professions: this impression puts people off running as candidates and is damaging democracy.”
In the inquiry’s survey of MPs, 97% of respondents said that they are active on Facebook, and 91% on X.
But MPs were significantly more likely to have experienced abuse on X (81%) and Facebook (77%) compared to Instagram (15%), despite the fact that similar numbers of respondents reported being active on the three platforms.
The report was clear that the main sources of threats to MPs and candidates are Facebook and X. There is a slow but steady move of UK politicians away from X in the face of such abuse, as Byline Times has covered.
Perhaps most starkly, security services told the Conference that the two social media firms have become increasingly unwilling to remove abusive content even in the face of official requests.
The parliamentary security department (PSD) told MPs it now routinely refers only “directly threatening and grossly offensive material to platform[s]” as they know this is the only material that will be considered for removal.
A senior PSD official said that while platforms have previously been willing to work with the security bodies to remove abusive and intimidating content, they no longer have “any interest whatsoever”. Both Meta and X appeared to loosen their moderation policies in the aftermath of Donald Trump’s election as US President.
The report criticised X and Meta for sending, respectively, their Government Affairs Director for Europe and a Public Affairs Manager. By contrast, when invited to give evidence to the Senate Judiciary Committee in the United States in January 2024, both companies sent their CEOs.
The Conference urged X to return to its old policy, currently still used by Meta, of deploying “automated messages to warn people when the content they are about to post may be abusive and contrary to the platform’s policies.”
While the report does not refer directly to the election of Donald Trump as a factor, it adds: “Meta and X appear to be increasingly unwilling to take down abusive or intimidatory content.”
Meta and X both denied that their platforms actively promote harmful content for commercial reasons, saying advertisers would not wish to be associated with violent content.
As noted, Meta’s policy is to remove violent threats against public figures, such as MPs and candidates, only if they are found to be “credible” – and Meta alone decides what counts as a “credible” threat.
Professor Patricia Rossini, a political communications academic at the University of Glasgow, told the Conference: “We really need to consider these issues from more of a ‘does it matter if it’s credible?’ standpoint”, since abuse in politics was putting people off running for election.
Sir Lindsay Hoyle’s inquiry also raised concerns that, even when individuals were criminally prosecuted for posts inciting violence on X, they were allowed to remain active on the platform.
Moreover, when police requested information about the identity of individuals who had posted content as part of an investigation, X often did not provide it. Meta, by comparison, provided data in response to 85% of the police requests it received last year.
Researchers and regulators currently have to rely on the data the companies are willing to share to assess whether abuse on their platforms is getting better, which Professor Rossini compared to asking children, “Did you eat the chocolate or not?”.
“And we have to trust that whatever they say they did is true, because there is virtually no way for independent researchers to verify any of these things,” Prof Rossini said.
As noted above, X told the inquiry that it has only 1,486 human reviewers working in English, while receiving, by its own account, more than 85,000 posts a day which it decides breach its terms of service.
Overall, X hosts around 500 million new posts a day, in all languages. Evidence given by the platform to the inquiry said just 0.017% of posts are in violation of their policies. But MPs and monitoring groups have frequently reported that violent and abusive content is now rarely taken down following complaints.
A 2020 study found that around a third of posts on X are in English, which would – very roughly – suggest at least 150 million new posts in English on X each day, to be moderated by around 1,500 English-language staff: in the order of 100,000 posts per reviewer per day.
The report is likely to provide added momentum for politicians to quit X altogether. Elon Musk himself frequently posts far-right content, recently addressed Tommy Robinson’s ‘Unite the Kingdom’ rally where he appeared to incite violence, and has endorsed the BNP-like Advance UK party, a breakaway from Reform UK.
Private Eye reports this month that each of the “very large online platforms” investigated (from Meta, TikTok and YouTube to X and LinkedIn) was found to “provide financial incentives to peddlers of disinformation via their monetisation programmes”. Content on X was deemed both the most problematic and the least credible, according to the Science Feedback study cited.
Got a story? Get in touch in confidence on josiah@bylinetimes.com