
‘Free Speech for the 0.1%: Why the Online Safety Act Is Failing’

This broken law isn’t only failing to prevent the spread of hate and misinformation, it’s actively protecting the ability of the most powerful and privileged to do so, argues Kyle Taylor

Elon Musk’s X Platform has continued to propagate hatred and misinformation in the UK despite the Online Safety Act


A new UK parliamentary report has confirmed what many of us in civil society have been saying for years: the Online Safety Act isn’t working. But it’s not just failing to stop harmful misinformation and abuse — it’s failing to even understand how those harms spread.

Here’s the crux of it: 0.1% of users are responsible for 80% of the disinformation circulating online. That’s not a free speech problem — that’s a virality problem. And yet our laws treat every user, every post, and every platform interaction as if it carries equal weight and risk, while carving out special exemptions for precisely those users who are most likely to share demonstrably false information. Pretending this isn’t the case is why the Act is falling short.


The Illusion of Equal Speech

The idea that all users are created equal online is seductive — and yet completely detached from reality. In practice, influence on digital platforms is wildly unequal. A tiny minority of accounts generate the vast majority of views, shares, and clicks. These users aren’t just louder — they’re structurally amplified by algorithms that reward outrage and attention.

But the Online Safety Act doesn’t account for any of that.

It applies obligations to platforms based on content type, not content impact. A hate-fuelled post seen by ten people is treated the same as a near-identical post that reaches ten million because it was shared by a prominent political figure or commentator. And that’s before you account for the fact that some of the most harmful content comes from voices the Act goes out of its way to protect.


The result is a law that fails to distinguish between an anonymous troll with 11 followers and a tabloid with a multimillion-person reach — despite the fact that one is far more likely to go viral and do real-world harm.

This isn’t about whether someone can say something online. It’s about whether those “free speech” rights extend to the structural incentives and algorithmic boosting that deliver maximum reach and, by extension, maximum profit. That’s what these platforms do best. That’s also what the law ignores.

Instead, the Act leans heavily on individual content moderation. But that kind of whack-a-mole approach is hopeless against modern information ecosystems. The real threat lies in the systems themselves, which allow, and even encourage, disinformation, hate speech, and conspiracies to spread at scale.

The most viral content online isn’t necessarily the most true, the most thoughtful, or the most important. It’s the most clickable — and often the most harmful. The Online Safety Act should recognise that amplification is what turns harmful ideas into societal threats. But it doesn’t. And that’s a systemic failure.


A Two-Tier Internet

Worse still, the Act doesn’t just overlook structural power — it actively entrenches it. Even the Act’s insufficient rules contain three key carve-outs, which together create what can only be described as a two-tier internet:

- An exemption for content from “recognised news publishers”, which sits largely outside platforms’ safety duties.
- A duty to protect “journalistic content”, which platforms must take special care before restricting.
- A duty to protect “content of democratic importance”, covering speech by and about politicians.

Together, these carve-outs mean that disinformation spread by a tabloid newspaper or an MP enjoys greater protection under the law than a legitimate concern raised, however carelessly, by an ordinary citizen. Add to that the fact that paid advertising — including political ads — is almost entirely excluded from the Act’s scope, and it becomes painfully clear that this law isn’t about protecting free speech. It’s about protecting privileged speech.

Take the now-infamous example of the Mail on Sunday suggesting Angela Rayner crossed her legs to distract the Prime Minister. Under the Act, if a Twitter user posts a misogynistic comment echoing that claim, it could be subject to takedown. But if the Mail publishes it and its post goes viral? Platforms are expected to leave it up — no matter the impact.

Or consider the “Great Replacement” conspiracy theory. If posted by an average user, it could be taken down. But if that same rhetoric comes from a politician, a partisan commentator, or a news outlet — and goes viral — the Act effectively shields it. That’s not freedom of speech. That’s state-sanctioned amplification of harmful content by the people already most able to cause harm.


Systemic Fixes, Not Content Moderation

So what’s the alternative?

We need to shift from moderating content to regulating systems. Instead of trying to police every post, regulators should focus on how platforms are designed to maximise reach, engagement, and outrage — and how those systems respond to harmful content once it begins to spread.

One potential answer? Virality thresholds. Rather than applying mitigation measures ad hoc, we should intervene once a piece of content crosses a defined threshold of attention — say, 10,000 shares, likes, or comments. At that point, platforms could:

- Pause or slow further algorithmic amplification while the content is reviewed.
- Attach a context label or warning where the content is demonstrably false.
- Add friction, such as a prompt before resharing.

This approach preserves free expression at the point of publication but constrains its public danger once it starts to spread. It’s responsive, proportionate, and rooted in the actual mechanics of online harm. The post itself is never taken down.
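
To make the mechanics concrete, here is a minimal sketch of what a virality-threshold trigger could look like in code. It is purely illustrative: the 10,000-interaction figure comes from the example above, while the names (Post, apply_friction, check_virality) are invented for this sketch and describe neither any platform’s real systems nor anything the Act mandates.

```python
# Illustrative sketch only: a hypothetical virality-threshold check.
# None of these names correspond to a real platform API.
from dataclasses import dataclass

VIRALITY_THRESHOLD = 10_000  # combined shares, likes, and comments, per the example above


@dataclass
class Post:
    post_id: str
    shares: int = 0
    likes: int = 0
    comments: int = 0
    friction_applied: bool = False  # whether spread-limiting measures are active


def total_engagement(post: Post) -> int:
    """Combined engagement used to decide whether the threshold has been crossed."""
    return post.shares + post.likes + post.comments


def apply_friction(post: Post) -> None:
    """Stand-in for whatever mitigations a regulator might mandate.

    For example: pausing algorithmic boosting, attaching a context label,
    or prompting users before they reshare. The post itself is never removed.
    """
    post.friction_applied = True


def check_virality(post: Post) -> None:
    """Trigger mitigations once, and only once, a post crosses the threshold."""
    if not post.friction_applied and total_engagement(post) >= VIRALITY_THRESHOLD:
        apply_friction(post)


# Example: a post that has just crossed the threshold.
viral_post = Post(post_id="abc123", shares=6_000, likes=3_500, comments=700)
check_virality(viral_post)
print(viral_post.friction_applied)  # True: spread is constrained, content stays up
```

The point of the sketch is the design choice it encodes: intervention is keyed to measured spread, not to who posted or what category the content falls into — exactly the shift from content-based to impact-based regulation argued for above.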

Some platforms claim to have done this in limited ways. Facebook has tools to throttle content that goes viral too quickly. Twitter (now X) has piloted prompts before sharing. But without legal mandates, these tools are inconsistently applied and easily gamed. What’s needed is a systems-first framework — not more exemptions for the loudest voices.


Reframing the Debate

The Online Safety Act was supposed to be “world-leading.” In truth, it’s become a case study in regulatory capture — protecting the voices most likely to incite harm under the guise of defending freedom.

But here’s the truth: free speech without accountability is just privilege. When the most powerful voices online are exempted from scrutiny, and when their speech is amplified algorithmically without oversight, we don’t get more democracy — we get more disinformation.

A better model is possible. One where freedom of expression is protected equally — not tiered by wealth, reach, or political power. One where we regulate the spread, not the speech. And one where virality, not volume, triggers action.

It’s time we stopped pretending this is about individual censorship. This is about regulating systemic amplification to minimise the likelihood of harm at scale. We need to start treating this as a public health issue, or else events like nationwide riots sparked by demonstrably false information will become our new normal. That’s how you preserve rights while reducing harm.

