
The Facebook Oversight Board is itself an Information Operation

Caroline Orr shows how the political havoc caused by Mark Zuckerberg – the most powerful non-state actor in the world – is baked into his business model


By now, the destructive effects of Facebook are well known. We’ve seen democracies destabilised, societies become even more polarised, extremist groups establish new strongholds, and vicious online hate spill over into real-world violence and death.

What we haven’t yet seen is accountability – and despite Facebook’s promises, its Oversight Board doesn’t appear to have the will or the power to deliver that. 

Last week, the Board upheld (sort of) Facebook’s decision to ban Donald Trump in the aftermath of the 6 January attack on the US Capitol, which was carried out by supporters of the former President who believed his false claims about the 2020 election being stolen from him.  


Handing over the decision-making to a supposedly “independent” panel was meant to outsource blame for a controversial decision to the Board, which is currently made up of 20 people who were handpicked by Facebook executives. 

But even the Board designed by Facebook to take the fall for Facebook wasn’t willing to shoulder the blame for one of the biggest decisions in the company’s history. Although it upheld the ban for now, the Board told Facebook that the company was responsible for making the final call on Trump’s account.

As Facebook debates what to do about Trump, it’s what the company isn’t talking about that we should focus on: Facebook itself is designed for exploitation by people exactly like Trump.


Insurrection: An Unprecedented but Inevitable Outcome

In the lead-up to the Capitol riot on 6 January 2021, Facebook – particularly the ‘Groups’ feature – served as a staging ground where many users discussed bringing weapons and carrying out acts of violence at the pro-Trump event. More than 100,000 users posted hashtags promoting the ‘Stop the Steal’ rally in the days before the insurrection, while other users posted public and private messages announcing their intention to go and, in some cases, to carry out violence.

Dozens of Republican Party groups from around the country used Facebook to help organise bus trips to DC for the rally and, in other groups, users encouraged each other to illegally bring guns to the event. 

As violence broke out at the Capitol, some Facebook users live-streamed the incident – footage that would later be used by law enforcement to track down participants.

In the aftermath of the insurrection, Facebook’s Chief Operating Officer Sheryl Sandberg tried to cast blame on other social media platforms, saying: “I think these events were largely organised on platforms that don’t have our abilities to stop hate, don’t have our standards and don’t have our transparency.”

But that’s not what the evidence shows. 

Facebook was the most widely-used platform for planning and coordinating ahead of the Capitol riot, according to charging documents filed by the Justice Department. Much of this activity was later used by law enforcement to identify participants in the violence, with nearly one-third of criminal charges that stemmed from the insurrection mentioning Facebook. 

Only after the insurrection did Facebook announce plans to remove ‘Stop the Steal’ content from its platform. But, by then, the damage was done. 


Facebook: The Global Impact

The consequences of Facebook’s failure to address disinformation and hate on its platform have been felt, in varying degrees, all over the world. The company’s negligence helped fuel the genocide of the Rohingya minority in Myanmar and has been blamed for inciting religious violence in India and Sri Lanka. There’s also evidence from Germany indicating that Facebook use is associated with attacks on refugees, and civil rights leaders in the US blame the platform for facilitating a variety of civil rights abuses including hate crimes and attacks on voting rights. 

Meanwhile, in Canada, the far-right Yellow Vest movement turned to Facebook as its main organising platform, often using it to issue threats and plan violent demonstrations alongside known hate groups, including several that had a presence at the 6 January riot. Facebook consistently failed to take action against these users, even after a leading intelligence group warned that the movement was becoming a national security threat. 

Facebook has also helped to facilitate the spread of conspiracy theories like QAnon; white supremacist and terrorist propaganda; anti-vaccine content and other dangerous health misinformation; and criminal networks trafficking in everything from endangered wildlife to stolen credit cards.

That’s in addition to the platform’s role as a hub for foreign influence operations, including Russia’s 2016 attack on the US election, as well as the Cambridge Analytica scandal.

Each time a new crisis arises at Facebook, the company rolls out a familiar playbook. First, it tries to deny or downplay the existence of the problem. Eventually, it acknowledges that the problem is real, but denies wrongdoing and outsources blame to others. It has vehemently rejected the idea that it needs to be regulated and has consistently refused attempts at truly independent oversight. And despite rolling out various new features hailed as “fixes”, disinformation and hate continue to thrive on the platform.

That’s because these “fixes” are all aimed downstream, at the symptoms of the problem, but never at the problem itself. It’s not in Facebook’s interest to address the root cause, because that would mean dismantling the company’s entire business model.


Groups: ‘A powerful networking and recruitment tool to terrorist and hate groups’

Facebook introduced its ‘Groups’ feature in 2010, opening up a private place to organise and communicate, but also to spread conspiracy theories and plan violent events – all with minimal oversight. 

Once a person joins one group, the algorithm is designed to recommend similar groups to join. 

While this can be a helpful feature for users searching for local neighbourhood groups or professional organisations, it has also proven to be an incredibly effective tool for extremists looking to recruit new members. According to Facebook’s own internal research, 64% of users in extremist groups joined based on a recommendation from Facebook.
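
To see why this matters, consider how a bare-bones co-membership recommender behaves. The Python sketch below is purely hypothetical (the group names and scoring are invented, and Facebook’s real recommendation system is proprietary and far more sophisticated), but it captures the basic feedback loop: joining a single fringe group is enough to surface its neighbours.

```python
from collections import Counter

def recommend_groups(user, memberships, top_n=3):
    """Naive 'people who joined X also joined Y' recommender.

    memberships maps each user to the set of groups they belong to.
    Candidate groups are scored by how many of the user's co-members
    also belong to them, which is why joining one fringe group tends
    to surface more of the same.
    """
    my_groups = memberships[user]
    scores = Counter()
    for other, their_groups in memberships.items():
        if other == user or not (my_groups & their_groups):
            continue  # only consider users who share at least one group
        for group in their_groups - my_groups:
            scores[group] += 1
    return [group for group, _ in scores.most_common(top_n)]

# Hypothetical users and groups, invented purely for illustration
memberships = {
    "alice": {"local_gardening", "stop_the_steal"},
    "bob":   {"stop_the_steal", "patriot_militia"},
    "carol": {"stop_the_steal", "patriot_militia", "qanon_research"},
}

print(recommend_groups("alice", memberships))
# ['patriot_militia', 'qanon_research']
```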

In a 2019 complaint, the Washington, DC-based National Whistleblower Center (NWC) warned that Facebook was “providing a powerful networking and recruitment tool to terrorist and hate groups” by offering tools that were being used “to identify and recruit supporters.”

The same complaint also provided evidence that Facebook’s auto-generation function was inadvertently producing extremist propaganda, including over 30 auto-generated pages for white supremacist groups and at least one auto-generated page for al-Qaida in the Arabian Peninsula. 

“Facebook has no meaningful strategy for removing this terror and hate content,” NWC warned.

Subsequent investigations have found that Facebook not only continues to provide a platform for extremist content but, in many cases, is creating it.

In August 2020, Facebook pledged to take action against militias and other “militarised social movements,” but a recent investigation by the Tech Transparency Project found more than 200 militia pages and groups that were still active on Facebook, 70% of which had the word “militia” in their name.

According to the Office of the Director of National Intelligence, militia groups are one of the “most lethal” domestic extremism threats in America today. 

Yet it was only this year, after the events of 6 January, that Facebook announced it would no longer algorithmically recommend political groups to users. 


Algorithms: ‘Exploit the human brain’s attraction to divisiveness’

Facebook had plenty of warning and more than enough opportunities to address the design features and enforcement gaps that allowed thousands of extremists to use the platform to plan a violent attack on the US Capitol out in the open. 

In a presentation slide showing the findings from a 2018 internal report, Facebook reported: “Our algorithms exploit the human brain’s attraction to divisiveness.”

“If left unchecked,” it cautioned, the algorithm would recommend “more and more divisive content in an effort to gain user attention and increase time on the platform.”

But instead of using these findings to improve the algorithm and mitigate the harmful effects, Mark Zuckerberg and other Facebook executives “shelved the research” and “weakened or blocked efforts to apply its conclusions to Facebook products.”

In other words, Facebook studied how its algorithms exploit and harm users – and then ignored the findings of its own research.

Although outside groups have attempted to study the effects of policy changes and new content moderation rules, Facebook is notoriously secretive about its algorithm, so researchers still have no way to systematically evaluate how the company’s business decisions affect the information environment and, more broadly, how they affect democracy.

This means that even when Facebook announces a “fix” to one of its many problems, there’s no reliable method to see if it’s working.

However, we do know what’s not working.

Facebook’s artificial intelligence system used for detecting prohibited content is severely flawed and can be evaded with simple manoeuvres like using slightly altered terminology or embedding words as graphics instead of searchable text. 
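
A toy example shows how fragile exact-match filtering is. The blocklist and posts below are invented for illustration, not drawn from Facebook’s actual system, but the same class of evasion defeats any filter that looks for banned phrases verbatim.

```python
BLOCKLIST = {"militia", "stop the steal"}  # hypothetical banned phrases

def naive_filter(post: str) -> bool:
    """Flag a post only if it contains a banned phrase verbatim."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

print(naive_filter("Join the militia rally on the 6th"))   # True: caught
print(naive_filter("Join the m1l1tia rally on the 6th"))   # False: one character swap slips through
print(naive_filter("See the attached image for details"))  # False: text rendered inside an image never reaches a text filter
```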

In the lead-up to the EU Parliament elections in May 2019, journalists and researchers uncovered a group of far-right influence networks in Spain that reached 1.7 million people on Facebook. A short time later, they also revealed an astroturf campaign promoting a hard Brexit, as well as advertisements posted by the Trump campaign that violated Facebook’s own rules.

None of these were detected by Facebook’s own tools.

In the US, an investigation by the non-profit activist organisation Avaaz found that variations of misinformation that had already been marked false by Facebook were going undetected in the lead-up to the 2020 election. 

“Flaws in Facebook’s fact-checking system mean that even simple tweaks to misinformation content are enough to help it escape being labeled by the platform,” Avaaz reported in October 2020. “For example, if the original meme is marked as misinformation, it’s often sufficient to choose a different background image, slightly change the post, crop it or write it out, to circumvent Facebook’s detection systems.”

“As a result, many Facebook pages seem to be slipping under Facebook’s radar for being a ‘repeat offender’ and avoid being down ranked in users’ News Feeds. This leaves them free to go viral ahead of the US Elections in November,” the report warned.

But relying on humans to moderate the platform isn’t tenable, either, at least the way it’s currently being done. 

Content moderators at Facebook have spoken out about the trauma resulting from their jobs, and the company recently settled a $52 million lawsuit filed by third-party contractors who said their mental health suffered due to working conditions at Facebook. The company also reportedly failed to provide adequate protections and support for content moderators, who are asked to monitor and remove some of the most disturbing content on the internet.

On top of these issues, Facebook also has a well-documented pattern of arbitrary enforcement that allows misinformation, hate speech, and even death threats to stay on the platform. This stems at least in part from the involvement of executives like Mark Zuckerberg and Joel Kaplan in content moderation decisions. On numerous occasions, members of Facebook’s leadership team have reportedly personally intervened to allow content from right-wing figures and disinformation purveyors like Alex Jones to stay on the platform.

In 2015, after Trump used Facebook to call for a “total and complete shutdown” of Muslims entering the US, Kaplan – the head of Facebook’s policy team – reportedly advised the company not to take action against Trump’s account because of the risk of backlash from conservative users. 

“Don’t poke the bear,” Kaplan said, according to the New York Times.


News Feed: ‘Optimises for engagement… bullshit is highly engaging’

At the heart of these problems lies Facebook’s business model – the very foundation upon which the platform exists – which poses an existential threat to democracy.

Facebook now serves as the internet’s information gatekeeper, with more than a third of Americans using the social media platform as a primary source of news. In other parts of the world, Facebook’s influence is even greater: in some countries, Facebook quite literally is the internet.

The platform has had a devastating impact on journalism and the publishing industry alike, in part because of its dominance in the advertising industry, but also because of its impact on the information ecosystem more broadly. 

Facebook prioritises engagement – clicks, views, sharing, time spent on the platform – not quality or accuracy. During the 2016 election, “fake news” outperformed real news on Facebook, in large part due to the platform’s recommendation algorithm, which favours sensational, hyper-partisan content and produces news feeds that look vastly different depending on one’s political affiliation.
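
The structural problem is easiest to see in miniature. In the sketch below, the weights and the ‘predicted’ numbers are invented, and Facebook’s real ranking model is vastly more complex and not public, but the point stands: if the objective function only measures engagement, nothing in it rewards accuracy.

```python
def engagement_score(post):
    """Rank purely on predicted interactions; accuracy never enters the formula."""
    return (post["predicted_clicks"]
            + 2 * post["predicted_shares"]
            + 3 * post["predicted_comments"])

feed = [
    {"headline": "City council passes budget after routine vote",
     "accurate": True, "predicted_clicks": 40, "predicted_shares": 5, "predicted_comments": 3},
    {"headline": "SHOCKING: what THEY don't want you to know",
     "accurate": False, "predicted_clicks": 900, "predicted_shares": 400, "predicted_comments": 250},
]

# The false but sensational post takes the top slot; its accuracy flag is never consulted.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(engagement_score(post), post["headline"])
```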

“The News Feed optimises for engagement, [and] bullshit is highly engaging,” one former product manager wrote.

And therein lies the problem.

The Oversight Board’s recommendation that Facebook “undertake a comprehensive review” of the platform’s “potential contribution” to the insurrection is a reasonable one, but it sidesteps the bigger problem. While Facebook clearly played a role in the unprecedented events of 6 January, the threat it poses to democracy is not limited to one event, one person, or even one country. 

Facebook’s entire existence is built on a model that depends on you providing it with data, which it then sells to advertisers and uses to refine its algorithm to target you with the content most likely to capture your attention – content that is emotionally laden, hyper-partisan, and very often false or misleading.

Micro-targeting allows Facebook to split the electorate into interest groups and feed each one different information, effectively creating different realities for different parts of the population – realities that are nearly impossible to scrutinise – and wiping out the foundation for rational discourse that a healthy democracy needs. The same tools give hate groups a new way to recruit and connect, and enable people to target content at anti-Semites, militia members, and white supremacists.
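
The mechanics are simple enough to sketch. The segments and ad copy below are entirely hypothetical, but they illustrate how a single campaign can show incompatible messages to different slices of the electorate, with no shared public record left to scrutinise.

```python
# Hypothetical audience segments and ad copy, invented purely for illustration
ad_variants = {
    "suburban_parents":  "Candidate X will keep your children's schools safe.",
    "gun_rights_voters": "Candidate X will never touch your guns.",
    "climate_activists": "Candidate X backs a total ban on new drilling.",
}

def serve_ad(segment: str) -> str:
    """Each segment sees only the message crafted for it, never the others."""
    return ad_variants[segment]

for segment in ad_variants:
    print(f"{segment}: {serve_ad(segment)}")
```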

By merging mass surveillance, psychographic profiling, and micro-targeted advertising, Facebook has created a platform that not only enables profit-making from the theft and exploitation of personal data, but also allows your data to be used in ways that take your power away as a citizen and a voter, and gives that power to corporations and political campaigns. The consequences of this business model will continue to mount as long as accountability lags. 

From the genocide in Myanmar to Brexit in the UK to the 2016 presidential election and the Capitol insurrection, Facebook’s corrosive impact on society is wreaking havoc around the globe. These are not random or isolated incidents. These are the inevitable results of a problem that Facebook created and has proven itself incapable of solving. 

Regardless of what it decides to do about Trump’s account, Facebook will remain a powerful tool for authoritarians, profit-hungry corporations, and bad actors to use to their advantage, at the expense of us all. Facebook knows this, and it also knows how to manipulate the news cycle to make us talk about anything other than that. 

In this way, Facebook is waging its own information war, and the Oversight Board appears to be its latest weapon. 

