
The Link Between Social Media Algorithms and Online Radicalisation

Social media algorithms show violent content to boys, but that’s just the start of the problem, according to a leading investigative journalist

Ofcom has said that algorithms from all major social media companies have been recommending harmful content to children and “turning a blind eye” to problems. Photo: Alamy


The founder of an investigative journalism company has detailed the link between social media algorithms pushing harmful content to young users and online radicalisation that can compel people to “take real-world action”.

In a viral thread on X on Monday, Bellingcat founder and creative director Eliot Higgins drew a connection between research he had conducted and revelations in a recent BBC Panorama investigation by Marianna Spring that showed how social media algorithms are exposing boys to violent content.

The BBC story focussed on 16-year-old Cai who told the broadcaster how the first video he saw on his social media feed was of a cute dog, but then “out of nowhere” he was recommended videos of someone being hit by a car, a monologue from an influencer sharing misogynistic views, and clips of violent fights.

You get the picture in your head and you can’t get it out. [It] stains your brain. And so you think about it for the rest of the day

Cai, on social media content

Andrew Kaung, who worked as an analyst on user safety at TikTok from December 2020 to June 2022, told the BBC that he was alarmed some teenage boys were being shown posts featuring violence, pornography, and promoting misogynistic views, whereas young girls were recommended content based on their interests.

Social media companies use AI tools to remove the majority of harmful content and to flag it for review by human moderators, irrespective of the number of views it has received. However, the tools don’t catch everything.

Kaung told the broadcaster that when he worked at TikTok all videos that were not removed or flagged to moderators by AI – or reported by other users to moderators – were only reviewed when they hit 10,000 views.


TikTok told the BBC that 99% of content removed for violating its rules is taken down before it reaches that threshold, and that it undertakes proactive investigations.

Kaung further explained that the algorithms’ fuel is engagement, regardless of whether users respond positively or negatively, and told of the frustration and difficulty of bringing about change while working at both TikTok and Meta, Facebook’s parent company.

We are asking a private company whose interest is to promote their products to moderate themselves, which is like asking a tiger not to eat you

Andrew Kaung, former TikTok employee

The UK regulator, Ofcom, told the BBC that algorithms from all major social media companies have been recommending harmful content to children and “turning a blind eye” to problems.

In 2021, a study by Columbia University highlighted how social media can also provide platforms for “bullying and exclusion, unrealistic expectations about body image and sources of popularity, normalisation of risk-taking behaviours, and can be detrimental to mental health” amid a plethora of other problems.

While concerns about the damaging impact of social media have been raging for several years, Higgins has connected the dots to reveal how content can unlock even greater societal problems.


Higgins said Cai’s story is a “clear example of how online radicalisation” often begins with “seemingly harmless” content quickly escalating “because algorithms prioritise engagement over user safety”.

Social media platforms use algorithms, he explained, to keep users engaged, meaning users viewing somewhat neutral content “can suddenly find themselves immersed in more extreme or harmful material”.

This, Higgins wrote, creates “pathways”, with content getting progressively worse and opening the door to a “pyramid of radicalisation” that can lead to those at the top feeling “compelled to take real-world action, which can include violence”.

“Imagine someone who starts by googling ‘are vaccines safe’ and ends up burning down 5G towers because they think they’ll activate the microchips Bill Gates supposedly put in vaccines,” he wrote, adding: “This is the kind of real-world impact that can result from online radicalisation.”

Higgins added that the problem wasn’t just the content being recommended, it was the communities that content creates: “As users consume more extreme material, they often find themselves in online groups that reinforce and amplify these views, creating a powerful feedback loop.”

These communities provide a sense of belonging and validation, which can be very appealing, especially to those who feel alienated or distrustful of mainstream narratives. This is where radicalisation really takes hold—through community and interaction, not just through content

Eliot Higgins, Bellingcat

Higgins went on to explain how algorithms act as “gatekeepers”, deciding what content users and communities see, and in doing so, “can unintentionally guide young users down paths toward more radical thinking by constantly feeding them content that fuels outrage or excitement”.

He said Cai’s experience, as related to the BBC, “is a perfect example of this phenomenon”.

The journalist said a comprehensive approach was needed to combat content that corrupts young people, starting with better transparency around algorithms, better moderation, and better education to help users evaluate and navigate content so they can recognise when they’re being manipulated.

Radicalisation online isn’t just about the content—it’s about the digital environment that platforms create, where extreme views can spread unchecked. This is why understanding and addressing the role of algorithms and online communities is so important

Eliot Higgins, Bellingcat

Higgins said it is “up to all of us” to ensure social media platforms prioritise user well-being over engagement metrics.


