What does the Met Police’s use of live facial recognition mean for our privacy and freedom of expression?
The Metropolitan Police are going ahead with the systematic use of a mass surveillance tool.
Live facial recognition is used by authoritarian regimes to surveil the public – how has it been fast-tracked onto the streets of London without a strict legal framework in place, and will it actually help fight crime?
Live facial recognition has been welcomed by the Met Police’s Assistant Commissioner, Nick Ephgrave, who sees it as vital in bearing down on violence. The biometric technology compares people within the camera’s range against a watchlist – effectively an ongoing police line-up – and generates matches.
However, independent research found that it was 81% inaccurate when trialled in the capital. Of 42 matches, only eight were verified as correct.
“Watchlists can not only be made up of people who haven’t done anything wrong, but there have often been watchlists of people who are not actually wanted by the police for any crimes,” Griff Ferris, a member of the campaign group Big Brother Watch, told Byline Times.
“There are very few rules about who can be put on these lists. Previously we’ve seen police use it at a demonstration at an arms fair in Cardiff in 2018 where people on the watchlists were suspected to be activists. We’ve seen it being used at Remembrance Sunday 2017, where the Met Police used an entire watchlist of people who weren’t wanted in connection with any crimes, all of whom were considered to have mental health problems.”
A 2019 report on public attitudes towards facial recognition technology found that the public fear the normalisation of surveillance, but that the majority support its use when there is demonstrable public benefit.
However, there are concerns that the use of biometric technology overrides the principle of policing by consent as, by its very nature, individuals will not explicitly be asked for their permission for their images to be matched. The Met have said that areas where live facial recognition is used will be signposted and that people will be informed online about where it will be deployed – but there is no option for people to opt out if they are going to such an area.
A 2018 study by Big Brother Watch found that 2,451 innocent people’s biometric photos were taken and stored without their knowledge by South Wales Police. One man took the South Wales force to court on the grounds that the tool breached his right to privacy. The court found that South Wales Police met the requirements of the Human Rights Act, that they had complied with equality laws and had used personal data in a lawful manner.
This case has emboldened the Met’s legal standing. There is no law relating specifically to facial recognition technology, but there is a cluster of laws that relate to different parts of the whole process.
A 2019 report by the Information Commissioner’s Office on the police’s use of the biometric tool found that the “combination of law and practice can be made more clear” and that “a statutory and binding code of practice, issued by government, should seek to address the specific issues arising from police use of live facial recognition”.
The first use of live facial recognition since the trials was at Stratford Shopping Centre on 11 February.
For Liberty, an advocacy group aiming to protect people’s civil liberties in the UK, “this is a dangerous, oppressive and completely unjustified move by the Met”.
“Facial recognition technology gives the state unprecedented power to track and monitor any one of us, destroying our privacy and our free expression,” a spokesman said. “Rolling out a mass surveillance tool that has been rejected by democracies and embraced by oppressive regimes is a dangerous and sinister step. It pushes us towards a surveillance state in which our freedom to live our lives free from state interference no longer exists.”