
Elon Musk Has Already Built an Interplanetary Colony: It’s Called X

Caroline Orr explains how Trump has enabled the algorithmic capture of social media platforms, resulting in stealth censorship


For over a decade, Twitter served as a virtual town square where everyone from celebrities, journalists, and influencers, to politicians, entertainers, and heads of state congregated alongside the rest of us. 

Twitter became like the public record – a place for news, information, argument and dissent. It was real-time, fast-paced, and unparalleled in the world of social media. But whatever happened to what SpaceX founder Elon Musk called the “world’s town hall” after he took over Twitter, that open space no longer exists.

For many of us who used Twitter as a source of real-time information, transitioning to X was like moving to another planet. Instead of light, there was dangerous radiation. Instead of breaths of fresh air, toxic fumes. Instead of reaching out to fellow explorers, we were lost in space with huge time lags in transmission home.

Our old friends and associates had scattered in a desperate attempt to find a more hospitable home. We were alone in a strange land, and the man leading the entire thing seemed too caught up in memes and self-congratulatory tweets to pay any attention to the chaos over which he now reigned. 

In short, we got a trial run of what it was like to live on Mars – entirely captured by the whims of an oligarch. Looking back, it’s now clear that this was not an isolated event, but rather part of a larger takeover of our digital environment by money-hungry tech companies and powerful billionaires. By seizing control of the pipelines that deliver information, they are effectively holding us hostage from afar. This is the story of how it happened.



Unseen Curators

Most Americans think censorship is about banning books, deleting tweets, or deplatforming dissidents. And depending on where and who you are, this can indeed be a primary form of censorship. But in 2025 in America, the most widespread and insidious forms of censorship aren’t about takedowns or suppression of speech – rather, they’re about tuning the algorithm.

Quietly, subtly, and almost always with plausible deniability, the Trump administration appears to be executing a playbook of what can best be described as reverse algorithmic capture: a process through which government pressure reshapes the architecture of digital platforms, not by direct control, but by engineering incentives that guide algorithms to amplify preferred narratives and suppress dissent. 

Of course, you don’t have to go to Mars, but if you don’t, we can’t promise that there will be enough food and water to sustain you. The choice is yours.

Put differently, it’s the state using soft power to realign platforms’ invisible gates toward ideological conformity – but never giving up that element of distance and plausible deniability.

This strategy requires no censorship orders and no warrants served on Silicon Valley tycoons. It works by shifting the terrain on which digital gatekeeping occurs.

In July 2025, Trump signed an executive order titled Preventing Woke AI in the Federal Government. It requires that all artificial intelligence systems used by federal agencies be scrubbed of “ideological bias,” including references to diversity, equity, and inclusion (DEI), critical race theory, and other progressive frameworks. 

Although the order ostensibly applies only to federally procured AI systems, its ripple effects stretch much farther. Any company hoping to win government contracts must now conform to these ideological guardrails – not just in internal memos and tools, but in the public-facing models they train and sell. That includes the algorithms that shape search results, feed curation, ad targeting, and content moderation on platforms like YouTube, Meta, and X. It would be like telling every planet in the solar system that it has to abide by rules set by SpaceX if it wants to continue its relationship with the Sun.

Essentially, these decisions determine what information and content – as well as content creators – you see or don’t see, and have a lasting effect by defining the parameters of acceptable discourse and content.

Trust us. Land on Mars is being divided up equally, and we would never think of short-changing you or depriving you of your share of precious resources that may exist on the planet. What’s that — you want proof? No one else is demanding proof. You don’t want to cause any trouble, do you?


Capturing the Platforms

Furthermore, the executive order specifies that AI systems must be “ideologically neutral”, which… isn’t a thing. Ideology, by definition, is not neutral.

Opposing ideologies are simply different viewpoints on how to run a state. It’s not bad to have an ideology, and neither left nor right is inherently good or bad, correct or incorrect. The fact is that if an ideology exists at all, then there is no neutrality.

Moreover, there are many instances where neutrality is not the desired state. When there is a clear right and wrong, or an indisputably true and false position, we don’t want artificial intelligence systems that can’t or won’t tell us which is which. We don’t want systems so afraid of appearing ideologically biased that they refuse to tell us when something is true or false. At least, those of us who don’t want to live in an authoritarian country – or get stranded on another planet – don’t want systems like that.


As all of this was going on, YouTube was caught quietly relaxing its moderation guidelines, according to leaks verified by independent tech reporters. Internal training documents reportedly instructed moderators to allow more “public interest” content on sensitive topics – such as vaccine scepticism, climate change denial, election fraud claims, trans rights, and abortion rights – even if it contained potential misinformation.

While YouTube publicly framed the move as a balance between safety and speech, the timing aligned closely with intensifying political pressure from Trump allies, who have repeatedly accused platforms of suppressing conservative viewpoints. 

The sudden change also represented a significant shift in company policy. The result is an environment where not just controversial but actively harmful content now flows more freely and reaches more people, while platforms shield themselves behind a veneer of neutrality, even as they host the very content that others get in trouble for.

Oh, those toxic chemicals? Of course, we didn’t bring them to Mars with us. Some of the settlers must’ve smuggled them in. We’ll search them to root out the troublemakers.

Something similar happened in the earlier days of content moderation, when conservatives complained so loudly about Facebook supposedly censoring them that the company ended up actively amplifying conservative voices and right-wing news networks – despite never having censored them in the first place.



Stealth Censorship

The brilliance – and danger – of reverse algorithmic capture lies in its stealth. Traditional regulatory capture occurs when corporations co-opt public agencies to serve their interests. Here, it is the state applying strategic pressure to induce platforms to self-regulate in alignment with political ideology.

There’s no need for a direct order to remove a video or suppress a post. Instead, platforms interpret policy signals and reprogram their systems accordingly. Content moderation thresholds are recalibrated. Ranking systems are tweaked. Engagement scores are adjusted to reward compliance.
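To make that concrete, here is a minimal, purely hypothetical sketch in Python – every signal name, weight, and number below is invented for illustration, not drawn from any real platform’s code – of how re-weighting a single ranking parameter can bury content without ever deleting it:

```python
# Purely hypothetical illustration: the weights, signals, and numbers
# here are invented for this article, not taken from any real platform.

def visibility_score(post, weights):
    """Combine a post's signals into a single feed-ranking score."""
    return (
        weights["engagement"] * post["engagement"]
        + weights["recency"] * post["recency"]
        + weights["topic_penalty"] * post["flagged_topic"]  # 1 if topic is 'sensitive'
    )

# Before the quiet recalibration: flagged topics carry no penalty.
old_weights = {"engagement": 1.0, "recency": 0.5, "topic_penalty": 0.0}
# After: one number changes, and flagged posts sink in the feed.
new_weights = {"engagement": 1.0, "recency": 0.5, "topic_penalty": -0.8}

post = {"engagement": 0.9, "recency": 0.7, "flagged_topic": 1}
print(visibility_score(post, old_weights))  # roughly 1.25 - ranks normally
print(visibility_score(post, new_weights))  # roughly 0.45 - demoted, never deleted
```

Nothing is removed and no order is issued; the post simply sinks – which is precisely why a change like this is so hard to detect from the outside.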

The government never has to touch the code – but the algorithm still moves, and those in charge of programming it know what their assignment is. It’s like the pilot of a rocket ship being instructed not to follow the written protocol, but rather to steer the ship to wherever its billionaire owner wants to take it – while still reporting to Mission Control that all is normal.

This process is not simply about content – it’s also about epistemology. When search results are reordered or restructured entirely, when dissident voices are algorithmically demoted, when “public trust” becomes a metric for visibility, the rules of knowledge distribution are rewritten. Algorithms, once claimed to be neutral tools, become the unseen curators of reality. The average user will not see what changed – they will only know that some stories now feel louder, others strangely absent. One can’t know what one isn’t seeing.

Could you find your way around Mars in the dark without a map? That’s essentially what we’re being asked to do here.

Similarly, because of the lack of transparency surrounding algorithms and their output, very few people will ever see exactly why they are served certain content and not other content – which makes it nearly impossible to prove that any specific change is due to a policy pushed by the Trump administration.

Individuals are likewise unable to see why some of their posts get more traction than others – an unfair disadvantage, since it leaves them with no way to make an evidence-based evaluation of how their content performs or how various tweaks influence engagement metrics.

In a shallow attempt to feign transparency, Musk and the team at X posted what they claimed was the platform’s algorithm online. In reality, the published code contained very little information that was not already known – but it allowed the billionaire to tout his transparency as the head of X.



Technopolitics

Crucially, this distance from the dirty work also enables powerful actors to maintain plausible deniability. The administration can say – truthfully in the most literal sense, if not in substance – that it hasn’t censored anyone. The platforms can claim they’ve made internal adjustments based on evolving policy needs or commercial incentives. Even Elon Musk can say he didn’t truly have a role in it because, after all, he was never an official employee of the US government.

But the outcome is the same: a digital public sphere where ideology is upstream from visibility, and truth is filtered through a politicised machine. And with no transparency, there is no opportunity for accountability – nor many incentives for improvement.

We’re already seeing the effects of this. TikTok has faced increasing scrutiny for its handling of political content, but under the current climate, enforcement has softened for some narratives while intensifying for others.

Truth Social, Trump’s own platform, has quietly enforced shadow moderation practices that disproportionately restrict anti-Trump posts – despite branding itself as a free speech alternative. Meanwhile, AI companies adapting to federal compliance language are tweaking their foundational models to avoid generating answers deemed “biased” – a standard increasingly defined by political loyalty rather than epistemic integrity.

Even X, which touts itself as the ultimate free speech platform, has caved to authoritarian leaders like Narendra Modi, removing critical posts about him and censoring his critics ahead of India’s elections.

The consequences of this new phenomenon – reverse algorithmic capture – are profound. In the name of fighting censorship, the Trump administration is shaping digital ecosystems to favour its preferred narratives while chilling dissent. This is not the overt authoritarianism of content bans (though that happens here, too); it is a form of soft epistemic authoritarianism – where the gates remain open, but the path is paved for those who align, and obscured for those who do not.

If you want to make it back from Mars, you’ll need to play by the rules.

This strategy is especially potent because it rebrands algorithmic power as populist reform. By recasting content moderation as elitist suppression and positioning AI realignment as a defence of “American values,” Trump’s team has flipped the script. 

Platforms that resist may face regulatory scrutiny or contract exclusion. Those that comply enjoy market access and political cover. In essence, we are witnessing the emergence of a privatised information infrastructure that serves public ideological aims without invoking the First Amendment – or admitting the state’s hand.

We must be clear about what this is. Reverse algorithmic capture is not a policy debate about platform bias. It is a strategy designed and rolled out by the Trump administration to realign the rules of information visibility in favour of those in power, using regulatory levers to reprogram epistemic systems. If the last decade was about identifying how platforms moderate speech, the next will be about exposing how governments reshape the systems that do the moderating.

This strategy is difficult to detect and harder to challenge. But it can be resisted – through transparency mandates, third-party audits of algorithmic impact, whistleblower protections, and legal frameworks that treat visibility filtering as a public interest issue. And most of all, informed citizens who read articles like this one.

Because when governments can shape what you see without saying what they’ve done, democracy becomes a puppet show lit by a carefully engineered spotlight.

And if the algorithm is the gatekeeper now, then who holds the keys?

This is an edited version of the piece first published on Weaponized Spaces

