On June 29, the Australian Broadcasting Corporation (ABC) published the results of an investigation which revealed “foreign interference in the UK election”.
ABC had been analysing five apparently unrelated UK Facebook pages, ranging from the Reform-supporting ‘Patriotic UK’ to ‘BeyondBorders UK’, which promised “Stories of Hope, Unity, Resilience and Belonging”.
These and three other Facebook sites – ‘Commonsense Britain’, ‘British Patriots’ and ‘BritBlend’ – were run by administrators mostly based in Nigeria, and ran Facebook ads paid for using Nigerian currency, ABC found. Its investigation concluded that this bore the hallmarks of a Russian operation.
Russian interference in the UK’s democratic process is nothing new. Political donations and influence aside, accounts on Twitter (now known as X) linked to the Russian Internet Research Agency (IRA) spread fake news and promoted xenophobic messages in the aftermath of the 2017 UK terror attacks, while accounts at the extreme end of right-wing echo chambers were routinely targeted by the IRA to gain traction via retweets, the Centre for Research and Evidence on Security Threats reported.
In 2022, Spanish researchers published findings showing “Russia’s growing use of cyber operations combined with (dis)information to foment or exacerbate tensions between government and society and/or among different societal groups” in France, Germany, and the UK.
In 1974, Elisabeth Noelle-Neumann, who had previously worked for Nazi newspapers, published her theory of the ‘Spiral of Silence’: people who believe their opinions are in the minority fall silent, allowing the views of a loud minority to dominate and, ultimately, to appear to be the norm.
How Bots Work and How Widespread They Are
Few people realise, however, how such “hostile” campaigns are conducted across the spectrum of social media, how far the use of malicious bots and other automated programmes to spread them reaches, or what effect they can have.
Governments shown to have run such propaganda operations have, in the past, used a combination of human operatives and computer programming. In Azerbaijan, Israel, Tajikistan and Uzbekistan, as well as in Russia, student or youth groups are hired by government agencies to produce computational propaganda, Oxford University researchers reported. Young people are also employed in Russian and Chinese troll farms, while in Mexico, journalist Peter Pomerantsev found that young people acting as “cyborgs” and “bot herders”, along with fully automated social media personas, formed the backbone of President Enrique Peña Nieto’s successful 2012 election campaign.
In the UK, the Government Communications Headquarters (GCHQ) is now “broadening its recruitment base” to deal with increased demand for cyber actions, recruiting in schools in areas of socio-economic deprivation. The Intercept reported that GCHQ “has developed covert tools to seed the internet with false information, including the ability to manipulate the results of online polls, artificially inflate pageview counts on web sites…and plant false Facebook wall posts”.
CAPTCHAs (and reCAPTCHAs) are automated tests designed to distinguish robots from humans, used for security purposes – for example, to stop the creation of fake online accounts. Computer scientists have developed machine-learning programmes which solve them.
In tandem, “CAPTCHA farms” are multi-million-dollar businesses described as “digital sweatshops”, based in economically deprived countries, where human employees solve CAPTCHAs at a rate of $0.17 (£0.13) per 1,000 solved.
It is, reported InfoSecurity magazine, “a Nigerian-fraudster-style of economy with people effectively working along with the malicious bots in order to overcome human challenges. The bots are actually passing off this work to a human”.
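To put that rate in perspective, a rough, hypothetical calculation – assuming a worker solves one CAPTCHA every ten seconds, a figure not given in the reports – shows what a “digital sweatshop” wage looks like:

```python
# Back-of-envelope CAPTCHA-farm wage, using the rate quoted above.
# The ten-second solving speed is an assumption for illustration only.
RATE_PER_1000 = 0.17         # USD paid per 1,000 CAPTCHAs solved
SECONDS_PER_CAPTCHA = 10     # assumed average solving time

captchas_per_hour = 3600 / SECONDS_PER_CAPTCHA           # 360 per hour
hourly_wage = captchas_per_hour / 1000 * RATE_PER_1000   # about $0.06

print(f"CAPTCHAs per hour: {captchas_per_hour:.0f}")
print(f"Effective hourly wage: ${hourly_wage:.3f}")
```

Even at that sustained speed, the worker earns around six US cents an hour.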
The Threat From the Far-Right
Spanish researchers, in a comprehensive study of the way in which far-right groups network and circulate hate on Facebook, point out that the fastest-growing threat comes from right-wing extremists and hate groups, and argue that, online, “hate practices are a growing trend and numerous human rights groups have expressed concern about the use of the Internet – especially social networking platforms – to spread all forms of discrimination”.
The targets of this hate are, as UN Secretary-General António Guterres put it, “any so-called other”. For such hate to reach prominence in public media and discourse, it must first be visible. This is achieved by targeting prominent celebrities, together with politicians, journalists, lawyers and anyone else who might be expected to speak for the “so-called other”. Racist abuse predominates, and women in power are repeatedly targeted. A 2017 UK survey by Amnesty found that Asian and black women MPs received 35% more abusive tweets than white women MPs.
However, the fact that hate is being, as the UN put it, “weaponised for political gain” does not mean that it remains merely “the loud voices of a few people on the fringe of society”. On the contrary, the “weaponisation” consists precisely of reinforcing and amplifying those voices, forcing them into the mainstream.
Researchers at Carnegie Mellon University, in a preliminary examination of over 200 million tweets discussing the coronavirus, found that about 45% were sent by accounts more resembling “computerised robots” than humans – for example, tweeting more than would be humanly possible. Collating reports of such activity worldwide, which is rarely if ever done, produces an overwhelming picture, and the numbers are almost certain to be underestimates. “Almost all bad bots are highly sophisticated and hard to detect”, researchers have found.
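The “more than humanly possible” test is, at its simplest, a frequency heuristic. A minimal sketch follows – the daily threshold is an assumed, illustrative figure, and real detectors combine many more signals:

```python
from collections import Counter
from datetime import datetime

# Toy frequency heuristic: flag accounts whose daily posting rate
# exceeds a plausible human maximum. The threshold is an assumption
# for illustration; real bot detectors combine many more signals.
MAX_HUMAN_TWEETS_PER_DAY = 144   # assumed: one tweet every 10 waking minutes

def flag_suspect_accounts(tweets):
    """tweets: iterable of (account_id, datetime) pairs."""
    per_account_day = Counter((account, ts.date()) for account, ts in tweets)
    return {account for (account, _day), n in per_account_day.items()
            if n > MAX_HUMAN_TWEETS_PER_DAY}

# Example: an account posting 500 times in a day is flagged; 20 times is not.
sample = ([("acct_a", datetime(2020, 4, 1, 9, 0))] * 500
          + [("acct_b", datetime(2020, 4, 1, 12, 0))] * 20)
print(flag_suspect_accounts(sample))   # {'acct_a'}
```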
Nevertheless, at least 60% of the tweets about the 2018 Central American refugee caravan, which saw thousands of migrants making their tortuous way through Mexico to the US border, were estimated to have come from bots, which had evolved from “simply sending automated tweets that Twitter might delete” to working to “amplify and spread the divisive tweets written by actual humans”.
In 2018, the Anti-Defamation League found that between 30% and 40% of accounts regularly tweeting hatred against Jewish people were likely to be bots. In total, according to the ADL report, they produced 43% of all anti-Semitic tweets. The report, which came out the day before 11 people were murdered in a shooting at a Pittsburgh synagogue, concluded that political bots were “playing a significant role in artificially amplifying derogatory content over Twitter about Jewish people”.
The individuals behind the bots remained unknown. Katie Joseff, from the Digital Intelligence Lab, who co-authored the report, said that anyone could be behind them. “It wouldn’t be at all out of the realm of question for Nazis or anyone on the alt-right to be able to use bot accounts” she said. “They are very accessible, and people who just have normal social media followings, or even high schoolers, know how to buy fake accounts”.
In Indonesia in 2019, the BBC reported that any account using the hashtag #FreeWestPapua, representing the campaign for independence from Indonesian annexation, was immediately flooded by automated messages promoting the Indonesian government. The same X bots also targeted Veronica Koman, an Indonesian human rights lawyer, with rape and death threats.
In Finland, a similar hate campaign was launched against a journalist who, ironically, had broken the story about the pro-Kremlin propaganda machine operating through X bots and bot networks.
One of the most active accounts spreading “anti-Muslim hate” in the UK in 2017 was among thousands of accounts subsequently determined to be fakes created in Russia. It had also spread pro-Brexit messages. The X account of “Jenna Abrams”, which tweeted anti-Muslim and anti-feminist hate to over 70,000 followers, was revealed as another bot.
In fact, one-third of the X traffic around the UK’s Brexit referendum was generated by just 1% of the accounts, a large majority of them automated or semi-automated bots. Researchers also found evidence of a “massive” army of Japanese bots run by extremist right-wing supporters of the successful candidate Shinzo Abe, which flooded social media with aggressive and hateful tweets during Japan’s 2014 election.
The Battle Against the Bots – And Fake Engagement
After the 2017 US Senate Intelligence Committee hearings, attempts to restrict “botnets” – inter-connected webs of accounts – resulted in over 117,000 “malicious applications” and more than 450,000 suspicious accounts being blocked. Malicious bots have been found operating across every topic, including climate change, where a quarter of the tweets attacking both the science and Greta Thunberg in 2020 were found to come from bots.
By regularly culling millions of fake (automated) accounts and hate posts, X and Facebook have made headlines, and given the impression that the issue of fakery and hate online is at least partially being dealt with. Most recently, “a global network of fake accounts used in a coordinated campaign to push pro-Trump political messages” was deleted by both platforms. It bears repeating, however, that very few people are aware of the real extent and reach of these programmes, and of the degree to which digital spaces can be manipulated.
Public awareness does extend to the issue of “fake followers”, although few people realise how cheap and easy they are to purchase – $50 for 2,500 “followers”, HubSpot found in 2019. Again, one of the most high-profile examples is Donald Trump, over 60% of whose X followers were, in 2018, estimated to be “bots, spam, inactive or propaganda”. Both governments and “legitimate human users” who promote hate speech online have cheap, easy access to thousands of bought followers, and to bots which retweet their own messages, those of others, or each other’s.
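At the rate HubSpot quoted, the unit economics are stark. A hypothetical worked example, using only the figures above:

```python
# Unit cost of fake followers at the rate quoted above: $50 per 2,500.
# The campaign size below is a hypothetical figure for illustration.
cost_per_follower = 50 / 2500        # $0.02 per follower
followers_wanted = 100_000           # hypothetical campaign size

print(f"Per follower: ${cost_per_follower:.2f}")
print(f"{followers_wanted:,} followers: ${followers_wanted * cost_per_follower:,.0f}")
```

At two cents a follower, a 100,000-strong “audience” costs just $2,000.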
Equally, very few people know that in 2019, researchers at Beihang University and Microsoft China disclosed that they had developed a bot which reads and comments on online news articles. It consists of a reading network that “comprehends” an article and extracts its important points, and a writing network which then generates a comment based on those points and on the article’s title. “Our model can significantly out-perform existing methods in terms of both automatic evaluation and human judgment”, say the authors.
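Microsoft’s system is a neural model and its code is not reproduced here, but the two-stage design – a “reading” component that extracts salient points, and a “writing” component that turns those points and the title into a comment – can be illustrated with a crude, non-neural Python toy. Everything below is a simplification for illustration, not the researchers’ method:

```python
import re
from collections import Counter

# A crude, non-neural toy illustrating the two-stage design described
# above: a "reading" step that extracts salient sentences, then a
# "writing" step that templates a comment from those points and the
# title. This illustrates the architecture only, not Microsoft's model.

def read(article: str, n_points: int = 1) -> list:
    """Score sentences by word frequency and keep the top n."""
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(w.lower() for w in re.findall(r"\w+", article))
    return sorted(
        sentences,
        key=lambda s: sum(freq[w.lower()] for w in re.findall(r"\w+", s)),
        reverse=True,
    )[:n_points]

def write_comment(title: str, points: list) -> str:
    """Template a human-sounding comment from the title and key points."""
    lead = points[0].rstrip(".") if points else title
    return (f"Interesting piece on {title.lower()} - "
            f"the point that {lead[0].lower()}{lead[1:]} really stood out to me.")

title = "Bots and Fake Engagement"
article = ("Bots now generate a large share of online engagement. "
           "Researchers found that bots amplify divisive posts written by humans. "
           "Platforms struggle to detect them.")
print(write_comment(title, read(article)))
```

A real attacker would swap the template for a language model; the pipeline – read, extract, comment – stays the same.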
It is important to note that there were “existing methods” – something of which even the few journalists who responded to Microsoft’s announcement seemed unaware. “Essentially”, reported Vice, “the paper is suggesting that a system that automatically generates fake engagement and debate on an article could be beneficial because it could dupe real humans into engaging with the article as well”. The researchers, Vice noted, left that statement out of the updated version of their report. Instead, they acknowledged that a bot which pretends to be human, and which comments on news stories, may pose some “risks”.
As the Irish Times pointed out, the code was now available on the free tech platform GitHub: “so, although Microsoft acknowledges it would be unethical to use this to deceive people, there is nothing stopping those with the technical know-how from doing so”.
Alongside this (and almost entirely unreported) are bots which can be used to upvote or downvote comments on a range of media platforms. The influential news and discussion platform Reddit exemplifies the problem, with dozens of sites offering the chance to buy automated upvotes or downvotes there. Researchers proved that it was both “easy and cheap” to maliciously manipulate posts and comments. Facebook and Disqus have also been linked to automatic bot votes.
The results showed that anyone with an agenda can secretly manipulate Reddit votes, boosting visibility and interaction – “a lightning-fast Reddit upvote service, speed up to 3000 upvotes per hour”, one site currently promises, for only $0.15 a vote.
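At the advertised rate, the economics of a manipulation campaign are trivial – a hypothetical illustration, using only the figures quoted above:

```python
# Cost and speed of the upvote service advertised above.
# The 5,000-vote target is a hypothetical campaign size.
PRICE_PER_UPVOTE = 0.15      # USD per vote, as advertised
UPVOTES_PER_HOUR = 3000      # advertised maximum speed

target_upvotes = 5000
cost = target_upvotes * PRICE_PER_UPVOTE       # $750
hours = target_upvotes / UPVOTES_PER_HOUR      # ~1.7 hours

print(f"{target_upvotes:,} upvotes: ${cost:,.0f} in about {hours:.1f} hours")
```

For a few hundred dollars, delivered within a morning, a post can be boosted far beyond its organic reach.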
In 2020, South Korean researchers examined one of South Korea’s main news portals, Naver News, and discovered more than 10,000 comment threads which were highly likely to have been manipulated. They found that co-ordinated manipulation had increased significantly in recent years.
The Hateful Media
Hiding in plain sight, the largest perpetrators of hate in the UK, online and off, are the press. The right-wing British media was “uniquely aggressive in its campaigns against refugees and migrants”, the United Nations High Commissioner for Refugees reported in 2015. Irish Travellers, Gypsies and Roma have also been the subject of focused, long-term attacks – most egregiously by the Sun, Daily Mail and Daily Express, all of which display prominent headlines in every UK high street.
While German newspapers were also found to be using dehumanising language which portrayed refugees, for example, as a “common threat”, the UN Human Rights Commissioner highlighted the “decades of sustained and unrestrained anti-foreigner abuse, misinformation and distortion” in the UK press, the most extreme of which was comparable to the language which incited the Rwandan genocide. In 2017, the former UK Conservative minister Baroness Sayeeda Warsi called hate speech in the UK press a “plague…poisoning our public discourse…crowding out tolerance, reason and understanding” – in this case with Muslims the principal target.
Although physical sales across the mainstream press have been falling, what is little understood is these papers’ worldwide online reach. They are still thought of as “British”, but in 2019 the Sun had a global online readership of over 32 million a month; the Daily Mail and the Express, around 25 million. Campaigning by “Stop Funding Hate”, which persuaded companies, through consumer pressure, not to advertise on such platforms, seems to have had an effect, with the group recording a drop in anti-migrant front pages from over 100 in 2016 to zero in 2019.
Alongside the online reach, however, come the newspaper comments sections, which are meant to be regulated by the newspapers themselves and by the Independent Press Standards Organisation (IPSO). As campaign group “Hacked Off” report, the sections are instead a “Wild West” of unregulated inflammatory hate speech, where racist comments can receive hundreds or thousands of “upvotes” and remain on the site for months, if removed at all.
Comments on MailOnline’s (the Daily Mail’s) coverage of a fire at a refugee camp in Lesbos on 9 September 2020 demonstrate that the hate speech of previous headlines had moved below the line, where it had become even more virulent.
“Turning Europe into the same cess-pit they come from” read one top-rated comment. “These are the kind of S*** the UK is letting in” and “These people are from countries that are essentially cesspits and they seem to want to turn the world into the same hole they crawled out of” read others. The refugees are compared to “soldier ants”, “invaders” and “money grabbers”. A month after publication, the comments were still in place. The UK government, Hacked Off reported, intended to exempt newspaper comments sections from any Bill regulating the internet.
Two UK newspapers not cited by the UN in its denunciation of hate speech were the traditionally liberal, sometimes seen as left-wing, Guardian and its Sunday paper, the Observer. Although both produce headlines such as the entirely inaccurate “There’s a social pandemic poisoning Europe: hatred of Muslims” (2020), which can do little but spread fear and division among the communities it apparently aims to protect, the comment sections of both are well-moderated and largely free of hate speech.
However, the Observer‘s coverage above, and its more recent coverage of the QAnon conspiracy, illustrate a larger problem: the apparent inability, or unwillingness, of the mainstream media to address computer-generated propaganda.
The QAnon conspiracy, described as “the Nazi cult rebranded”, had, the Observer reported in 2020, grown to “terrifying” levels in the UK and elsewhere: with membership of Facebook groups up by 120%; engagement rates up by 91%; and millions of tweets and posts using QAnon-related phrases and hashtags. “Britain is the second country in the world for output of Q-related tweets”, reported the website Wired, basing its piece, as did the Observer, on a report by the Institute for Strategic Dialogue (ISD).
Tech investigators had found, two years previously, that QAnon was, from the beginning, artificially boosted by bots. Researchers at the Middlebury Institute of International Studies at Monterey had successfully programmed the latest in artificial intelligence neural networks to reproduce QAnon propaganda. The ISD’s report mentioned automatic bots once, in passing, with a reference to a report suggesting that Russian bots may have boosted the QAnon traffic on X; the Observer and Wired did not mention it.
Even in media which does not promote hate speech, the strategies behind, and opportunities for, artificial inflation of posts, tweets, hashtags and comments are unexamined. As yet, there has been no investigation comparable to that of the research into Naver News, on the sources and manipulation of hate speech in UK online comments, or into the upvoting of such comments.
“I popped onto the Mail yesterday. Crikey. There are people there who really do believe that what the country needs (is) an authoritarian government and less freedom. It’s s***-your-knickers scary stuff. It gets four-figure upticks”, one Guardian reader worried at the beginning of this month. Meanwhile, over on Reddit, users were comparing ‘likes’ on their Daily Mail comments, and all were finding that the names of the apparent people voting their comments ‘up’ were clearly algorithmic – invariably a colour followed by a noun. A sample comment: “It’s clearly Bots, what are the Daily Mail up to?”
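The pattern those readers spotted – invariably a colour followed by a noun – is exactly what naive automated account creation produces. A hypothetical sketch of how such names are generated (the word lists here are invented for illustration):

```python
import random

# A hypothetical sketch of the naive username generation the readers
# spotted: pick a colour, pick a noun, sometimes append digits.
# Accounts created this way betray themselves by their uniform pattern.
COLOURS = ["Red", "Blue", "Green", "Silver", "Amber"]
NOUNS = ["Fox", "River", "Stone", "Falcon", "Harbour"]

def fake_username(rng: random.Random) -> str:
    name = rng.choice(COLOURS) + rng.choice(NOUNS)
    if rng.random() < 0.5:                 # sometimes add a number suffix
        name += str(rng.randint(10, 99))
    return name

rng = random.Random(42)
print([fake_username(rng) for _ in range(5)])
# e.g. ['BlueFalcon', 'AmberStone27', ...] - uniform, and uniformly suspicious
```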
In the meantime, the South Korean researchers had found that, indeed, “Readers tend to estimate public opinion based on those comments” – and also to change their own opinions in the face of them. If the majority in the UK are sucked into the “Spiral of Silence” on bot-promoted topics – whether anti-immigration, anti-Muslim, pro-nationalist or pro-authoritarian – the implications for the future are obvious.