Lack of Transparency over Police Forces’ Covert Use of Predictive Policing Software Raises Concerns about Human Rights Abuses
UK police forces are under scrutiny for their lack of transparency regarding the use of harmful technologies known to exacerbate racist policing
Police forces in the UK are routinely refusing to give the public insight into the methods used in policing, citing exemptions on the grounds of national security and law enforcement even when the companies involved have been implicated in scandals or potential breaches of human rights.
Research conducted by The Citizens found that Freedom of Information (FOI) requests regarding multiple tech firms were met with almost identical responses by police forces, which could ‘neither confirm nor deny’ their use of covert software to monitor citizens’ behaviour.
They cited the threat from terrorism and the risk that revealing the software used could “cause operational harm”. However, there is plenty of documented evidence that these tools are often discriminatory and undermine citizens’ right to protest.
Currently, through the use of blanket exemption clauses – and without any clear legislative oversight – the public has little access to information about systems that may be being used to surveil them.
Companies including Palantir, NSO Group, QuaDream, Dark Matter and Gamma Group are all exempt from disclosure under the precedent set by the police, along with another entity, Dataminr.
Dataminr is a social media monitoring tool used in newsrooms, corporate settings and by police, with The Citizens revealing that, since 2017, the UK Government has spent more than £5.1 million on its technology. In the US, the same technology has helped police monitor and break up Black Lives Matter and Muslim rights activism. Dataminr software has also been used by the Ministry of Defence, the Foreign, Commonwealth and Development Office, and the Cabinet Office, with the latest contract revealed in February.
FOI requests for more information as to the nature of Dataminr’s contracts with the Cabinet Office have been refused and appeals to the Information Commissioner’s Office not upheld.
Aside from Dataminr’s known government contracts, the company has links to public officials and has been spoken about favourably by government departments. The firm’s success in monitoring public posts referencing the Coronavirus, for instance, has been noted by the Centre for Data Ethics and Innovation and at least three officials and parliamentarians have had dealings with it.
Dominic Fortescue, former government chief security officer, held an advisory position with the company between May and June 2022; while in 2020 former Metropolitan Police Commissioner Lord Bernard Hogan-Howe was paid to deliver an online video conference for the company in relation to its work for the Ministry of Defence.
Lord Nigel Darroch also held a consultancy position with the firm as late as 2022, with Dataminr featured among others at the ‘future policing’ zone of last year’s emergency services event, chaired by Lord Hogan-Howe.
Fortescue, Lord Hogan-Howe and Lord Darroch are not accused of racism or of condoning racist policing.
Monitoring social media to proactively collect data about future protests is a form of predictive policing. In 2016, the American Civil Liberties Union (ACLU) reported that US law enforcement was deploying the tool to surveil and prevent BLM protestors from gathering.
Dataminr has since been embroiled in scandals that have resulted in limitations of its use abroad. New research shows that, far from being a ‘neutral’ observational tool, Dataminr produces results that reflect its clients’ politics, business goals and ways of operating.
Cambridge University’s Dr Eleanor Drage and Dr Federica Frabetti of Roehampton University argue that protests which politically oppose the government are the ones protest-recognition tech is most likely to flag as potential threats – because it only shows clients (in this case law enforcement) the protests they are already concerned with.
In their analysis of a Dataminr patent, they have shown that the technology identifies ‘dangerous’ protests that align with clients’ own worldview. Dataminr’s surveillance mechanisms are therefore not general but proactively targeted at the protests that clients are preoccupied with. As the academics explain, if a Dataminr client is worried about BLM agitation, the system is far more likely to flag early signs of BLM mobilisation as a ‘dangerous’ protest about to happen.
Following the murder of George Floyd in May 2020, Dataminr partnered with New York and Minneapolis police departments, alerting them to BLM protests by monitoring social media data and ‘tracking’ this data while clients “listened”. These departments requested specific information about BLM protests and Dataminr software gathered pictures and status updates that protestors and bystanders were posting on social media platforms, packaging this information into alerts sent directly to law enforcement.
In 2016, hashtags and keywords such as #muslimlivesmatter and #BLM became ‘trigger’ words for US police. Dataminr combined these social media tracking tools with technologies such as natural language processing, audio processing and machine vision – which help identify who or what is in an image, including text extraction from photos. This combination of tools, it claims, gives a robust impression of how likely it is that a crowd of people will become a dangerous protest.
According to a Dataminr promotional blog, its software can identify a fire from social media posts and other sources. When the software flags images that contain elements that denote fire (such as smoke) in combination with the presence of hashtags and keywords like “fire” and “burning”, the result is cross-checked and can be sent out as an alert.
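To make that pipeline concrete, the sketch below shows in a few lines of Python how text keywords might be cross-checked against an image classifier’s score before an alert is raised. It is purely illustrative: the names, thresholds and data are invented for this article and are not drawn from Dataminr’s actual systems.

```python
# Illustrative sketch only: a toy event-detection pipeline of the kind described
# above, combining keyword matching with an image-classifier score before an
# alert is raised. All names, thresholds and data are hypothetical.
from dataclasses import dataclass

FIRE_KEYWORDS = {"fire", "burning", "#fire", "smoke"}

@dataclass
class Post:
    text: str
    image_fire_score: float  # e.g. output of a vision model, 0.0-1.0

def keyword_signal(post: Post) -> bool:
    """True if the post's text contains any trigger keyword or hashtag."""
    tokens = {t.lower().strip(".,!?") for t in post.text.split()}
    return bool(tokens & FIRE_KEYWORDS)

def should_alert(posts: list[Post], min_corroborating: int = 3) -> bool:
    """Cross-check: only alert when several posts agree across both signals."""
    corroborating = [
        p for p in posts
        if keyword_signal(p) and p.image_fire_score > 0.8
    ]
    return len(corroborating) >= min_corroborating

posts = [
    Post("Huge fire near the station, smoke everywhere", 0.93),
    Post("Lovely sunset tonight", 0.10),
    Post("#fire crews arriving now", 0.88),
    Post("Building is burning, stay away", 0.91),
]
print(should_alert(posts))  # True: three posts corroborate across text and image
```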
While this process is relatively apolitical when applied to fires, teaching the software to associate certain kinds of images, text and hashtags with a ‘dangerous’ protest results in politically and racially biased definitions of what dangerous protests look like. This is because, to make these predictions, the system has to decide whether the event resembles previous events that were labelled ‘dangerous’ – for example, past BLM protests.
Because Dataminr has been extensively used to crack down on BLM and other racially-informed protests, the system has been trained to make a connection between crowds of predominantly black and brown people and a potentially violent protest. Events labelled dangerous in the future need to correspond to events labelled dangerous in the past to be validated as a correct prediction. As Frabetti and Drage have observed, Dataminr is therefore disproportionately likely to flag gatherings with comparable demographics as dangerous. And yet Dataminr calls this a “prediction” – suggestive of Minority Report-style policing whereby it is possible to see and prevent crime before it happens.
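The point about labels can be shown with a deliberately simplified toy model, which is not Dataminr’s: if the ‘dangerous’ labels in the training data record who was policed rather than who was actually violent, a similarity-based predictor will reproduce that pattern on new events. The features, data and choice of a nearest-neighbour classifier below are invented purely for illustration.

```python
# Toy illustration (not Dataminr's model): a similarity-based 'predictor' trained on
# past events whose 'dangerous' labels reflect biased policing decisions will flag
# future events that merely resemble those past ones. Features and data are invented.
from sklearn.neighbors import KNeighborsClassifier

# Each past event: [linked_to_BLM_hashtags, reports_of_violence]
past_events = [
    [1, 0], [1, 0], [1, 1],   # BLM-linked gatherings, mostly peaceful...
    [0, 1], [0, 0], [0, 0],
]
# ...but the labels record which events were policed as 'dangerous',
# not which ones actually turned violent.
labels = [1, 1, 1, 0, 0, 0]

model = KNeighborsClassifier(n_neighbors=3).fit(past_events, labels)

# A new, entirely peaceful BLM-linked gathering is still predicted 'dangerous',
# because it resembles past events that were labelled that way.
print(model.predict([[1, 0]]))  # [1]
```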
Events surrounding Dataminr’s partial ban in the US have shown the difficulties of limiting law enforcement’s use of these technologies.
When the ACLU proved in 2016 that Dataminr’s interventions were contributing to racist policing, the company was banned from granting US fusion centres direct access to Twitter’s API. Fusion centres are state-owned and operated facilities that serve as focal points to gather, analyse and redistribute intelligence among state, local, tribal and territorial (SLTT), federal and private sector partners in order to detect criminal and terrorist activity.
However, US law enforcement found a way around these limitations by continuing to receive Dataminr alerts outside of fusion centres.
There are currently no limitations on how law enforcement and border control clients in other countries access Dataminr’s full services. In 2016, for instance, Dataminr worked with South African law enforcement to monitor students at the #Shackville protests, which challenged black students’ lack of access to decent university accommodation at the University of Cape Town.
Drage and Frabetti’s research also shows that this software is more likely to flag ‘left-wing’ protests and racial minority protestors as potentially dangerous, having been trained on BLM and the Shackville protests.
Since Dataminr is based on machine learning technologies, it ‘learns’ to identify a protest by scanning large quantities of data, making associations based on past events “according to law enforcement’s existing concerns and perceptions of what a protest is and what kind of people are likely to make it turn dangerous”. If it is trained on police data, Dataminr can therefore learn to associate gatherings of black people or BLM slogans with violent protests.
This process does not end with training. It continues while the technology is deployed by law enforcement, because, as Drage and Frabetti explain, “if the client feeds back to the system that certain alerts are or are not working for them, this in turn reinforces or adjusts the knowledge base and the way the system generates classification rules”.
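A minimal sketch of such a feedback loop, assuming only that client responses to alerts are fed back as a training signal, might look like this; the class, features and weighting scheme are hypothetical rather than a description of Dataminr’s implementation.

```python
# Minimal sketch of the feedback loop described above, under the assumption that
# client responses to alerts are fed back as a training signal. Names are hypothetical.
from collections import defaultdict

class AlertScorer:
    """Scores candidate events from feature weights and learns from client feedback."""

    def __init__(self, learning_rate: float = 0.1):
        self.weights = defaultdict(float)
        self.lr = learning_rate

    def score(self, features: set[str]) -> float:
        return sum(self.weights[f] for f in features)

    def feedback(self, features: set[str], client_found_useful: bool) -> None:
        # Positive feedback reinforces the features of this alert; negative
        # feedback suppresses them - so the client's preoccupations shape
        # what gets flagged next time.
        delta = self.lr if client_found_useful else -self.lr
        for f in features:
            self.weights[f] += delta

scorer = AlertScorer()
blm_event = {"#BLM", "crowd", "city_centre"}
fire_event = {"#fire", "smoke"}

# The client repeatedly marks BLM-related alerts as 'useful'...
for _ in range(5):
    scorer.feedback(blm_event, client_found_useful=True)

# ...so similar gatherings now outscore other events, regardless of actual risk.
print(scorer.score({"#BLM", "crowd"}), scorer.score(fire_event))  # 1.0 0.0
```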
Therefore the software reflects patterns of discrimination and further embeds them into police practice. Discriminatory algorithms that predict crime based on data about the past (for example, where crime is likely to be committed or how an individual or a group will behave) lead to the over-monitoring of certain areas, groups or individuals. They can also lead to life-changing decisions that, due to the lack of transparency and understanding of how these algorithms work, are extremely difficult to challenge.
The lack of transparency and oversight in how these technologies are used is deeply concerning, especially in the wake of the UK Government’s attempts, via the Public Order Bill, to curtail protests such as Just Stop Oil’s ‘slow marching’ before disruption begins. The bill has been contested in the House of Lords and by campaigners, with Labour peer Shami Chakrabarti condemning it because “these powers are going to be used and abused by accident or design against people who may not even be protestors”.
A 2021 inquiry by Parliament’s Justice and Home Affairs Committee reported being “taken aback by the proliferation of artificial intelligence tools potentially being used without proper oversight, particularly by police forces”. The report found that the use of algorithmic tools in crime prevention was “a new Wild West” and that “all 43 police forces are free to individually commission whatever tools they like” as part of an ‘opaque’ system in need of overhaul.
The Government rejected most of the report’s recommendations – notably the creation of an oversight body and a certification system for new technologies – stating that it was “not persuaded by the arguments put forward”. This was despite the report finding that such software could “broadly undermine human rights”.
Following the report’s publication, then Crime and Policing Minister Kit Malthouse said that legislating on the police’s deployment of advanced algorithmic tools might “stifle innovation” and that disputes over the use of new technologies could be settled in court.
Griff Ferris, legal officer at criminal justice watchdog Fair Trials, said in response that “the Government should legislate to provide proper legal safeguards in their use, or in the case of certain technologies, prohibit their use entirely, and not rely on people challenging them in court”.
“Taking the government or police to court should be a last resort, not the sole way of holding them accountable for their actions, not least when litigation is extremely expensive and the majority of people will not be able to afford it,” he added.
While the threat from terrorism is very real, there remains a serious question of oversight in how advanced technologies are implemented, made worse by the lack of transparency. In addition to facial recognition, police have also previously been taken to court by campaign groups for refusing to confirm or deny their use of other covert tactics, such as international mobile subscriber identity (IMSI) catchers, used to intercept mobile phone data.
At least 14 police forces in the UK are also known to have used or considered predictive policing technologies, with campaigners arguing that “we cannot be sure that these programmes have been developed free of bias and that they will not disproportionately adversely impact on certain communities or demographics”. Use of these technologies has, in the past, not been subject to public consultation and, without basic scrutiny at either a public or legislative level, there remains no solid mechanism for independent oversight of their use by law enforcement.
Dataminr was approached for comment.
Additional reporting by Dr Eleanor Drage and Dr Federica Frabetti