The founder of an investigative journalism company has detailed the link between social media algorithms pushing harmful content to young users and online radicalisation that can compel people to “take real-world action”.
In a viral thread on X on Monday, Bellingcat founder and creative director Eliot Higgins drew a connection between research he had conducted and revelations in a recent BBC Panorama investigation by Marianna Spring, which showed how social media algorithms are exposing boys to violent content.
The BBC story focussed on 16-year-old Cai who told the broadcaster how the first video he saw on his social media feed was of a cute dog, but then “out of nowhere” he was recommended videos of someone being hit by a car, a monologue from an influencer sharing misogynistic views, and clips of violent fights.
Andrew Kaung, who worked as an analyst on user safety at TikTok from December 2020 to June 2022, told the BBC that he was alarmed that some teenage boys were being shown posts featuring violence and pornography, and promoting misogynistic views, whereas young girls were recommended content based on their interests.
Social media companies use AI tools to remove the majority of harmful content and to flag it for review by human moderators, irrespective of the number of views it has received. However, the tools don’t catch everything.
Kaung told the broadcaster that when he worked at TikTok all videos that were not removed or flagged to moderators by AI – or reported by other users to moderators – were only reviewed when they hit 10,000 views.
TikTok told the BBC that 99% of the content it removes for violating its rules is taken down before it reaches that threshold, and that it also undertakes proactive investigations.
Kaung further explained that the algorithms’ fuel is engagement, regardless of whether users respond to content positively or negatively, and spoke of the frustration and difficulty of trying to bring about change while working at both TikTok and Meta, Facebook’s parent company.
UK regulator Ofcom told the BBC that algorithms from all major social media companies have been recommending harmful content to children and “turning a blind eye” to problems.
In 2021, a study by Columbia University highlighted how social media can also provide platforms for “bullying and exclusion, unrealistic expectations about body image and sources of popularity, normalisation of risk-taking behaviours, and can be detrimental to mental health” amid a plethora of other problems.
While concerns about the damaging impact of social media have been raging for several years, Higgins has connected the dots to show how such content can unlock even greater societal problems.
Higgins said Cai’s story is a “clear example of how online radicalisation” often begins with “seemingly harmless” content quickly escalating “because algorithms prioritise engagement over user safety”.
Social media platforms use algorithms, he explained, to keep users engaged, meaning users viewing somewhat neutral content “can suddenly find themselves immersed in more extreme or harmful material”.
This, Higgins wrote, creates “pathways”, with content getting progressively worse and opening the door to a “pyramid of radicalisation” that can lead to those at the top feeling “compelled to take real-world action, which can include violence”.
“Imagine someone who starts by googling ‘are vaccines safe’ and ends up burning down 5G towers because they think they’ll activate the microchips Bill Gates supposedly put in vaccines,” he wrote, adding: “This is the kind of real-world impact that can result from online radicalisation.”
Higgins added that the problem was not just the content being recommended but the communities that content creates: “As users consume more extreme material, they often find themselves in online groups that reinforce and amplify these views, creating a powerful feedback loop.”
Higgins went on to explain how algorithms act as “gatekeepers”, deciding what content users and communities see, and in doing so, “can unintentionally guide young users down paths toward more radical thinking by constantly feeding them content that fuels outrage or excitement”.
He said Cai’s experience, as related to the BBC, “is a perfect example of this phenomenon”.
The journalist said a comprehensive approach was needed to combat content that corrupts young people, starting with better transparency around algorithms, better moderation, and better education to help users evaluate and navigate content so they can recognise when they’re being manipulated.
Higgins said it is “up to all of us” to ensure social media platforms prioritise user well-being over engagement metrics.