
The Matrix of Violence: Automated Racism in Police Surveillance


A Black Lives Matter protest in London in June 2020. Photo: Ollie Millington/Rmv/Zuma Press/PA Images


After the ‘A’ Level exams fiasco, Zeeshan Ali reveals how a reliance on technology and AI is reinforcing prejudice on the streets


Recent weeks have witnessed a growing outcry against the state’s use of algorithms to predict the exam results of students across the country, to the detriment of those predominantly from disadvantaged backgrounds.

Teenagers who were predicted A* and A grades found themselves receiving Bs and Cs, with no evidence justifying the downgrades and their teachers left just as perplexed. Patterns, however, quickly began to emerge.

The keen-eyed noted that students from private schools in affluent areas were awarded better grades by the algorithm than their counterparts from disadvantaged backgrounds. It soon became apparent that bias had inadvertently seeped into the algorithm’s operation.

Following widespread anger, the Government largely abandoned the use of the algorithm (though not entirely) and instead relied on the judgement of teachers and schools to award grades.

Whilst it is tempting to dismiss this scandal as a one-off, the unethical use of automated processes by the state is quickly becoming the new norm.

Sectors such as healthcare and hospitality are undergoing a dramatic transition to reap the benefits of algorithms and artificial intelligence (AI), with efficiency improving by orders of magnitude. Other sectors, however, are adopting algorithms to the detriment of the public, and of minority communities in particular – such as police forces implementing AI-based systems without proper consideration of the outcomes.


Gangs Violence Matrix

One example of the police’s questionable use of AI-based systems is the “gangs violence matrix” (GVM), which was introduced soon after the London Riots in 2011. The Metropolitan Police describes the GVM as an “intelligence tool to identify and risk-assess gang members across London who are involved in gang violence”.

The AI system works by maintaining a database of names of people who have previously come into contact with the police and partner agencies, and determining the level of ‘threat’ they pose from their network of friends and acquaintances. The GVM’s flaws are a consequence of this ‘guilt-by-association’ logic, and of a biased dataset supplied by an institution shown to be systemically racist.
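The Metropolitan Police has not published the Matrix’s scoring method, but a minimal sketch can show why a ‘guilt-by-association’ score inherits whatever bias exists in the contact data fed into it. Everything below – the records, weights, threshold and function names – is invented for illustration and does not describe the real system.

```python
# Hypothetical illustration only: a naive 'guilt-by-association' score.
# Nothing here reflects the Metropolitan Police's actual methodology;
# the contact records, weights and threshold are all invented.

# Each record is (person, associate) taken from police contact data.
# If stop-and-search already targets some communities disproportionately,
# those communities dominate this dataset before any scoring happens.
contacts = [
    ("Bill", "Associate A"),
    ("Bill", "Associate B"),
    ("Associate A", "Known offender"),
]

# Prior 'harm' scores assigned to people already on the database.
harm_score = {"Known offender": 10, "Associate A": 4, "Associate B": 1}

def network_threat_score(person: str) -> int:
    """Score a person purely by who they have been recorded alongside."""
    neighbours = [b for a, b in contacts if a == person]
    # The person's own behaviour never enters the calculation:
    # the score is just a weighted sum over their recorded associates.
    return sum(harm_score.get(n, 0) for n in neighbours)

THREAT_THRESHOLD = 5  # arbitrary cut-off for illustration

score = network_threat_score("Bill")
print(score, "-> flagged" if score >= THREAT_THRESHOLD else "-> not flagged")
# Bill is flagged despite no record of his own offending, because being
# stopped more often simply creates more 'associations' to be scored.
```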

In one telling incident, Bill, a young black man, was repeatedly stopped by the police without ever having committed any crime or done anything to arouse reasonable suspicion. Bill’s initial interaction with the police was at the age of 11 when he was stopped and searched. When he asked why he was being searched, one of the police officers replied: “Because I want to”. The police’s interest in him only grew over the years, as the GVM flagged his network as warranting attention. By the time he was 14, Bill was being arrested sometimes more than once a week – again without ever being charged.

Automating state processes in contexts demonstrated to be structurally racist risks further cementing structural bias.

The systemic racism of the Metropolitan Police first brought Bill to its officers’ attention, but it was the algorithm that reinforced the idea that he was a threat. In essence, the dataset provided to the Matrix disproportionately represents individuals from vulnerable communities, and the network-based threat modelling further justifies the increased securitisation of particular communities.

Bill’s sole ‘crime’ was to be living in a poor area, knowing people who were engaged in criminal behaviour, and being a victim of an unethical stop and search.

In other incidents, people who had shared videos of grime or drill music were deemed to be expressing ‘gang affiliation’, warranting their addition to the GVM and their classification as a ‘threat’.


Problematic Surveillance

The Metropolitan Police is also increasingly normalising the use of facial recognition technology, which it says is “intelligence-led” and deployed at “specific locations”. The danger for minority communities is stark.

The force states that it is “using this technology to prevent and detect crime by helping officers find wanted criminals”. It also claims that the system is robust, with errors occurring in only one in every thousand cases. However, an independent review (commissioned, and later dismissed, by Scotland Yard) found that the rate of false positives was likely to be four in every five cases.
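The two figures are not necessarily contradictory: they most plausibly measure different things, one the error rate per face scanned and the other the share of alerts that turn out to be wrong. The brief calculation below – with entirely invented numbers, not drawn from the review – sketches how a system that mis-flags only one face in a thousand can still be wrong roughly four times out of five when it raises an alert, simply because genuinely wanted people are rare in any crowd.

```python
# Illustrative base-rate arithmetic only; all figures are invented,
# not taken from the Metropolitan Police or the independent review.

faces_scanned = 10_000        # people walking past the cameras
false_alert_rate = 1 / 1_000  # the 'one error in a thousand' framing
wanted_in_crowd = 2           # genuinely wanted people present
detection_rate = 1.0          # assume every wanted person is matched

false_alerts = faces_scanned * false_alert_rate        # 10 wrong alerts
true_alerts = wanted_in_crowd * detection_rate         # 2 correct alerts
wrong_share = false_alerts / (false_alerts + true_alerts)

print(f"{wrong_share:.0%} of alerts point at the wrong person")  # ~83%
```

Measured per face scanned, the error rate looks vanishingly small; measured per alert that officers act upon, most stops are of the wrong person.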

The review also noted that, whilst the Metropolitan Police states that the technology would only be used to identify individuals who were “wanted” (in itself an ambiguous term), the data being utilised by the technology was at times significantly out of date. As such, individuals who had already been dealt with by the courts, and who were not wanted for the offences outlined in the dataset, were being stopped by the police.

The review added that, although the police claim that only particular locations would be affected and that the public would be clearly notified, the burden of avoiding the technology was significant. In some cases, to avoid particular locations, individuals were expected to take a nearly 20-minute detour. In other cases, posters notifying the public of the use of the technology put individuals in the range of the cameras themselves.

Perhaps most damningly, the authors of the review concluded that it was “highly possible” that the introduction of the technology “would be held unlawful if challenged before the courts”.

The introduction of the technology should also not be considered an inevitable consequence of modern society. Major cities across the world have already acknowledged its unethical nature and have either banned it or halted trials (including in San Diego and San Francisco).

The development of AI and its incorporation within state functions promises significant benefits, reducing the workload on a stretched workforce. However, this technology should not be seen as a magic wand able to fix all problems without fault. Rather, automating state processes in contexts demonstrated to be structurally racist risks further cementing structural bias in ways not easily detected by human oversight.

The state should first and foremost address the institutional racism that has marred the functioning of key public institutions before using information from those institutions to train an algorithm.

Zeeshan Ali is a media and policy analyst for MEND (Muslim Engagement and Development)


