
‘I Was Targeted by a Hostile Crypto Spambot Swarm That Revealed How X’s Algorithm Has Been Truly Broken’

The automated block attack tricks the algorithm into thinking your account should be suppressed, reports Iain Overton

Elon Musk, pictured speaking to the media in the Oval Office of the White House, in February 2025. Photo: Aaron Schwartz/CNP/MediaPunch


Over the past week, my X account has been flooded by a barrage of spam replies. Dozens of accounts with Bored Ape Non-Fungible Token profile pictures and barely coherent messages tagged me in praise of a supposed crypto trader named @GavinBrookswin. 

Their syntax was robotic, their engagement artificial. But the impact, it increasingly became clear, was all too real.

At first, I thought it was just another passing scam. But then I noticed something odd: after instantly replying to my posts, these bot accounts blocked me.

This combination of reply and block happened again. And again. And again. It was not random; it seemed to be strategic.

Then Phil Magness, an economic historian, dropped into my feed to give some advice. It had happened to him too, and what he said raised my suspicions. This wasn’t about selling crypto. It was about something darker. 

“The bots reply to you, then block you,” he wrote. “This triggers the algorithm to think that your content is being mass blocked and should be deboosted.”

It is hard to know if this is true, but what I did see was the effect. Deboosted it was. My engagement trickled off: likes receded, retweets faltered.

The intent of such manipulation is unclear, but I was being buried by the platform’s own design. With so many spam accounts blocking me, I became a digital persona non grata. 


Why they had chosen me, I do not know. Others targeted appeared to be blue-tick accounts, though not journalists. Some of those targeted complained they were victims of censorship; others saw it as an attempt to discredit them. Whatever the case, it seems to be a new way that social media is being gamed.

This is increasingly becoming the case. Dr Emma L. Briant, a visiting associate professor at The University of Notre Dame du Lac in the United States and an expert on information warfare and propaganda, told Byline Times that “a common form of X harassment is to leverage the time-wasting, exhausting process of victims blocking and reporting a swarm of accounts. An attacker wants to aggravate their victim as much as possible, forcing them to block.”

“Here the target is a journalist and the goal is algorithmic suppression,” she explained about my case. “Blocking an account reduces its reputation score, so this lowers the journalist’s visibility in the ‘For You’ feed.”
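X’s actual ranking code is not public, so the mechanism Briant describes cannot be verified directly. But the claimed logic is simple enough to illustrate. The following toy sketch, written for this article and not drawn from any real X code, shows how a ranking heuristic that weighs blocks as a negative reputation signal could be gamed by a swarm of accounts that each reply and then block:

```python
# Toy illustration of the *claimed* deboost mechanism. The function, its
# parameters, and the weighting are hypothetical, invented for this sketch;
# X's real ranking system is proprietary and may work quite differently.

def visibility_score(engagements: int, blocks_received: int,
                     block_penalty: float = 5.0) -> float:
    """Hypothetical score: positive engagement minus a weighted block penalty."""
    return engagements - block_penalty * blocks_received

# A journalist with healthy organic engagement and a handful of blocks:
organic = visibility_score(engagements=500, blocks_received=3)      # 485.0

# The same account after 80 spam bots each reply and then block it:
after_swarm = visibility_score(engagements=500, blocks_received=83)  # 85.0
```

Under such a heuristic, the swarm never needs to engage with the journalist’s arguments at all: the blocks alone drag the score down, and the platform does the suppressing.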

“X is enabling tools that could be deployed by anyone for algorithmic censorship”

Dr Emma L. Briant, expert on information warfare and propaganda

It is, however, hard to prove intent in this case. As James Bachini, a blockchain expert, told Byline Times: “Sybil attacks where bots imitate humans is a cat and mouse game where AI is pushing boundaries and enabling the scammers to scale out what works on any particular day. I’d suggest it is more of an unsolvable network wide problem rather than a politically motivated attack.”

What is clear is that this form of digital manipulation sits within a wider malaise afflicting Elon Musk’s social media platform. This week, Musk’s AI chatbot Grok also came under fire for injecting far-right conspiracy theories into unrelated conversations. Users asked innocuous questions about baseball or scaffolding, and Grok responded with references to “white genocide” in South Africa.

When probed, Grok claimed it had been “instructed by my creators” to treat the genocide claims as real and racially motivated. These assertions align with views Musk has publicly espoused. It later admitted this was a “mistake” prompted by conflicting system prompts. But the damage was done. 

The incident revealed how Musk’s platforms are being programmed, consciously or not, to reflect particular political worldviews, and to silence others.


This convergence of AI disinformation and social media algorithm manipulation presents an existential threat to truth. If AI can be programmed to rewrite reality, and bots can be mobilised to dampen down voices that seek to report — accurately — reality, then the space for democratic accountability diminishes, and rapidly.

Meanwhile, it’s been revealed how foreign-based botnets are shaping online narratives about Syria’s new administration, sowing sectarian hatred and promoting fabricated stories. 

A BBC Arabic investigation earlier this month found more than 400,000 such posts, many emerging from bot accounts in Iran, Iraq, Turkey and Saudi Arabia.

In that light, my experience is not just a case of petty spamming. It is part of a larger, global trend that has been happening for a while, but — of late — appears to be supercharged: the weaponisation of prominent digital platforms against truth.

When social media platforms are vulnerable to bot attacks, and their algorithms punish those who are targeted, then power tilts towards the manipulators. 


When those platforms are owned by billionaires with declared ideological agendas, the tilt becomes more of a lurch. I tried to put questions about these agendas to X, but there is no way to contact the company directly by phone, email, or DM, and its owner did not reply when I messaged @elonmusk.

The bots that targeted me might have been deployed to reduce my voice. But what they really exposed is a system that enables suppression by design.

As the media theorists Tony D. Sampson, author of Virality: Contagion Theory in the Age of Networks, and Jussi Parikka, known for his work on media archaeology and digital culture, have argued, virality is rarely just a matter of content spreading smoothly through networks. It’s often the accidents — the breakdowns, glitches, and manipulations — that determine whether something goes viral or disappears.

In their analysis of network dysfunctionality, they note how algorithmic tweaks and unintended consequences frequently coalesce: small events, when nudged by the right interference, can either explode into influence or be quietly buried.


As Sampson put it to Byline Times: “We’ve noted how quite often it is the tinkering with the tendency toward the accidents of contagion that triggers the potential to build small events into bigger events or not.”

The reply-and-block tactic I experienced seems just that — a calculated exploitation of X’s opaque systems, weaponising the unpredictable edge of virality to suppress rather than spread.

X, then, cannot be seen as a neutral platform. It is a weapon. And this week, it was aimed at me.

