Why are social media companies so reluctant to accept the harm caused by allowing users to set up anonymous accounts?
The use of anonymous Twitter accounts was considered by a committee in the House of Lords last week, which examined how the social media giant’s “practices and policies affect UK democracy”.
Katy Minshall, Twitter’s Head of UK Government, Public Policy and Philanthropy, appeared before the Democracy and Digital Technologies Committee and was asked to explain why Twitter makes it so easy for users to hurl abuse or spread disinformation while hiding behind anonymous accounts.
Ms Minshall offered the standard Twitter defence, including a couple of case studies of benign uses of anonymity. But the committee was unconvinced, with Baroness Morris of Yardley telling her: “There’s way more abuse comes through anonymous users than the honourable causes you mention… that cannot be the reason you’ve got this policy across the board”.
She was right to be sceptical. This week, my campaign organisation, Clean Up the Internet, published a new report demonstrating how anonymity and pseudonymity fuel abuse and misinformation on social media platforms such as Twitter. To counter this, we propose a number of practical steps which platforms could introduce immediately, balancing the curtailing of toxic behaviour with the benefits of anonymity.
Our opinion polling suggests that, in wanting social media companies to change their approach to anonymity, the Lords are much more in touch with public opinion than is Twitter.
YouGov found that more than eight in 10 of the British public (83%) think that the ability to post anonymously makes people ruder online, while three-quarters (76%) do not believe that social media companies are doing enough to protect users from anonymous abuse. These sentiments cut across Leavers and Remainers, and Conservative, Liberal Democrat and Labour voters alike.
So, why are social media companies so resistant to tackling this issue?
Partly it stems from an almost religious hostility to any regulation. Of late, Silicon Valley’s antipathy to any accountability has been given something of a rebrand. These days, both Facebook and Twitter will say that they welcome “smart regulation”, but they almost immediately pivot to highlighting the risks of “unintended consequences”, “perverse incentives” and the dangers of an individual government acting alone. Companies that usually espouse a spirit of ‘move fast and break things’ suddenly come over all cautious.
Social media companies have always opposed anything that could introduce friction into their operating model – whether this is asking someone to demonstrate their profile is authentic, or putting any meaningful barriers between a young person and unsuitable content. One of Twitter’s more implausible claims to the House of Lords Committee was that it didn’t want to request more data from its users as part of any verification process because it believes in “data minimisation”. That’s a curious claim from a firm which relies on the monetisation of user data as its business model – but what is true is that an upfront, transparent verification process isn’t Twitter’s preferred method of trawling our data. It much prefers behavioural data, harvested silently as we use the platform, which can then be utilised to deliver more profitable adverts.
Silicon Valley clings to a world view which emphasises quantity and downplays the relevance of quality. Before the committee, Twitter repeatedly cited statistics on the number of users, the volume of comments, the number of clicks. ‘All engagement is good’ is a deep-rooted tenet. Viewed with this ‘engagement gaze’, all users are good for the stats – regardless of whether or not they are authentic and regardless of how responsibly they behave. And, as we all know, in a world of free social media, users are not customers – we are the product: to be packaged and sold to advertisers.
Twitter’s unyielding approach to anonymity suggests that we are unlikely to see voluntary change any time soon. In light of this, the Government’s ‘online harms’ agenda is a key opportunity to secure change. Progress has been slow, and there remain many gaps and missing details in the Government’s proposed approach. However, we believe that the introduction of a duty of care on social media companies, if properly formulated, could provide a mechanism for holding them to account for the social impact of key decisions, such as how they responsibly manage anonymity on their platforms.
The laissez-faire approach to anonymity favoured by the social media companies leads to significant harm, in terms of abuse and the spread of disinformation. But this isn’t inevitable or an accident – alternative approaches to managing anonymity exist, should the companies choose to adopt them. A ‘duty of care’ should hold the companies to account for their decisions and require them to consider factors beyond their own ideology and short-term profits. Our independent polling suggests that the public are growing impatient for such rules to be introduced.
Stephen Kinsella is the founder of ‘Clean Up the Internet’