
An Online Harm in Plain Sight: Social Media and Child Sexual Exploitation

With attempts to access child sexual exploitation material exploding during Coronavirus lockdowns, Katherine Denkinson reports on how big tech companies are not doing enough to protect children

Photo: Jaap Arriens/Sipa USA


Kearee is a 22-year-old trainee teacher. She is also a survivor of child sexual exploitation and the unofficial leader of a Twitter group campaigning to remove paedophiles from the social media platform.

Twitter has long had an issue with the so-called ‘minor-attracted person’ community. Because members of this community claim that they will never physically harm a child, the platform has controversially refused to remove them from the site.

In these online networks, so-called minor-attracted people can discuss their ‘sexual preferences’ – including the age and gender of the children they are attracted to.

Byline Times can now reveal how people engaging in online child sexual exploitation use hashtags, social networks, Bitcoin and file-sharing to share images and videos of child abuse – and how victims and survivors like Kearee have been forced to take matters into their own hands.



Hashtags and Downloads

In order to remain under the radar, sellers of child sexual exploitation material operate via coded hashtags and phrases in a number of different languages.

Customers seek out these hashtags and related accounts. They are then redirected to the messaging app Telegram, where videos of child sexual exploitation can be bought with Bitcoin. From there, the videos are transferred via Megalinks, a file-sharing system similar to Dropbox that was primarily used by adult performers but has recently become synonymous with the sale of child sexual exploitation media.

Twitter is not the only link in the chain. Instagram has an equally large number of accounts using the same system to promote Telegram channels, Megalink sales and images of child exploitation.

Child abuse in the UK has soared during the Coronavirus lockdown, with the Internet Watch Foundation (IWF) logging more than eight million attempts to access images of child abuse online.

In 2013, keen to tackle the appearance of child abuse material on its site, Twitter advertised its commitment to using a system known as PhotoDNA. The technology works by creating a digital ‘hash’, a unique fingerprint, of every known child abuse image. Once an image has been flagged in this way, it should be impossible to post or re-post it.
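In principle, the matching flow is simple: every upload is fingerprinted and checked against a database of hashes of known abuse images supplied by child protection bodies. The sketch below is purely illustrative and is not Twitter’s or Microsoft’s actual implementation; it uses an exact cryptographic hash (SHA-256) and a hypothetical blocklist, whereas PhotoDNA itself is a proprietary perceptual hash designed to match images even after resizing or re-encoding, which an exact hash cannot do.

```python
import hashlib

# Illustrative sketch only, not the real PhotoDNA system. PhotoDNA uses
# a robust perceptual hash; SHA-256 stands in here purely to show the
# fingerprint-and-blocklist flow described above.

# Hypothetical blocklist of fingerprints of known abuse images, as would
# be supplied by child protection bodies such as the IWF.
KNOWN_HASHES: set[str] = {
    "0f3a...",  # placeholder entry, not a real hash
}

def image_fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of an uploaded image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    """Reject the upload if its fingerprint matches a known image."""
    return image_fingerprint(image_bytes) in KNOWN_HASHES
```

A scheme like this only works if the platform actually runs the check on every upload and keeps its hash database current – one possible reason a deployment could fail in practice.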

However, a 2019 IWF report said that Twitter was responsible for almost half of all child abuse imagery posted online. It is still unclear why its PhotoDNA system has failed so dramatically.


A Network of Survivors

Kearee was only 11 when she was groomed online by an adult male who claimed to be two years older than her.

“One day…[he] started threatening and blackmailing me to get me to send nude pictures and videos,” she told Byline Times. “He said he knew where I went to school [and] would get me in trouble with the police or my parents… at 11 you believe that s**t.”

Eventually, Kearee told her parents about the abuse. Today, she says she is still “terrified of being on a webcam”, has had “nightmares for ages” and struggles to remember that it “wasn’t my fault”.

But more than a decade since Kearee’s experience of child sexual exploitation online, social networks such as Twitter and Instagram are providing a space for men like her abuser to use hashtags and coded messages to build a community. 

Campaigners such as Kearee have tried to ensure that accounts and images are suspended, but the systems for reporting these accounts are woefully inadequate. 

Twitter regularly points out the numerous ways it has made the site safer. For example, users can block and mute accounts they find objectionable at the touch of a button. In contrast, reporting child sexual exploitation material is more arduous – requiring users to visit the official Twitter website, find the appropriate page, fill out a form and wait. When users report members of the so-called ‘minor-attracted person’ community, the accounts are often found not to be in violation of the rules and nothing is done.

Worse, survivors who speak out against the behaviour of the so-called minor-attracted person community have been met with attacks and further abuse. One anonymous troll wrote: “I bet you enjoyed letting paedophiles see your body.”


Time for Action

Twitter told Byline Times that it has a “zero-tolerance policy for child sexual exploitation content”.

A spokesperson said that the social media platform “aggressively fights online child sexual abuse and has heavily invested in technology and tools to enforce our policy”. They said that “dedicated teams” are working to “remove content, facilitate investigations, and protect minors from harm – both on and offline”.

The vast majority of “bad faith actors” are “detected proactively through our internal tools and technology”, they added.

Twitter also told Byline Times that it discloses information about the accounts it removes every six months as part of the Twitter Transparency Report, and reports all incidences of child abuse material to the US National Center for Missing and Exploited Children. It also pointed out that it is a founding member of the Thorn Technical Task Force, which addresses the sexual exploitation of children.

A spokesperson for Instagram’s parent company Facebook said that “any content that endangers children is horrific and we’re committed to doing everything we can to keep it off our apps”.

“We remove this content and the accounts that share or solicit it as soon as we become aware of it, and are continuing to invest in technology to catch it faster,” a spokesperson told Byline Times. “We also just announced several new features to keep young people safe on Instagram, including preventing adults from messaging teens who don’t follow them.”

Despite this, a cursory Twitter search reveals that there are still numerous accounts promoting and selling child sexual exploitation material. It also remains possible for adults to message teenagers they do not know on Instagram.

Campaigners like Kearee now hope that the Government’s Online Harms Bill, which would make social media companies legally responsible for hosting dangerous imagery and for other safeguarding failures, could bring about change. It would allow major social media platforms to be held to account when they fail to remove child abuse imagery. The Bill would also ensure that users of social media platforms do not have to traumatise or re-traumatise themselves by looking at images of child abuse in order to report them and have them removed.

Kearee and other survivors of child sexual exploitation online are now safe from their perpetrators – but while these networks remain active, that safety is not possible for the children whose exploitation and rape are currently being sold via Twitter.


