Rights groups across the world have warned of serious human rights issues with facial recognition tech, in a joint letter to legislators. Signatories include Amnesty, Human Rights Watch, Access Now, and European Digital Rights.
The demand for a ban on facial recognition tech follows a UK Government push for greater facial recognition surveillance in policing and supermarkets, prompted by a rise in thefts.
Leading human rights, technology and equality organisations and experts across the world have called for an urgent halt to the use of facial recognition surveillance by governments and private companies.
In a joint statement, the expert voices warn of serious concerns about the human rights and discriminatory impacts of facial recognition surveillance – as well as an insufficient evidence base, safeguards, legal bases and democratic mandates to justify the use of the controversial technology.
The international action, taken by 120 civil society organisations working across six continents plus over 60 experts, comes at a time when governments around the world are considering whether to prohibit or permit the use of live facial recognition.
While the European Parliament has endorsed a blanket ban on police using AI-powered facial recognition surveillance under the AI Act and several US cities have banned the technology, the UK’s approach has been described as an “outlier”. In the UK, uses of live facial recognition surveillance have recently increased in the retail sector and some police forces.
Live facial recognition surveillance, where individuals’ faces are biometrically scanned by cameras in real-time and compared against a database, has been used in recent months at the Coronation of King Charles III, sports events, concerts and central London.
Research by Big Brother Watch, one of the groups that co-ordinated the international statement, found that over 89% of UK police facial recognition alerts to date have wrongly identified members of the public as people of interest.
International research, and the Metropolitan Police’s own testing of its facial recognition algorithm, have identified disproportionately high inaccuracy rates when attempting to identify people of colour and women, a problem the force has attempted to mitigate by adjusting its algorithm’s settings, Big Brother Watch claims.
The UK’s Information Commissioner recently found that facial recognition firm Facewatch, whose software is used by retailers across the UK including Southern Co-op supermarkets, had breached a string of privacy rules including the requirement that data is processed lawfully, fairly and transparently, and the data rights of children.
However, the ICO did not publish this information until it was obtained via the Freedom of Information Act and did not penalise the company. A recent investigation found that the Policing Minister had threatened to write a public letter to the Commissioner during its probe into Facewatch, unless the outcome was “favourable” to the company.
A Home Office spokesperson told the Guardian earlier this month: “As the documents show, the minister made it clear that he was not seeking to influence any ICO investigation but to inform them of the government’s views about the seriousness of retail crime and abuse of staff.
“The government has made no secret of its support for the appropriate use of technologies like facial recognition, which can help businesses protect their customers, staff and stock by actively managing shoplifting and crime.”
But Silkie Carlo, director of Big Brother Watch, said a “huge chorus” of international experts were raising the alarm about “intrusive, AI-powered facial recognition surveillance.”
“It is vital that the British government sits up and listens. This dangerously authoritarian technology has the potential to turn populations into walking ID cards and every democracy ought to ban it,” Carlo said in a statement.
She added: “As hosts of the [global] AI summit in autumn, the UK should show leadership in adopting new technologies in a way that has material benefits for the public and our rights, rather than a way that mirrors the dystopian surveillance practices of Saudi Arabia and China. Live facial recognition surveillance has been an expensive failure, with significant costs to the public purse and our civil liberties at a time when both need far more careful protection.”
Ella Jakubowska, Senior Policy Advisor at European Digital Rights (EDRi) added that the EU’s upcoming Artificial Intelligence Act gives the European Union the chance to become a “world leader” in protecting people from public facial recognition and other biometric surveillance. “European Parliamentarians have spoken loud and clear in support of strong bans,” she said.
However, some EU governments are continuing to push back, “citing vague claims of ‘safety’ and ‘security’ without providing any objective evidence” Jakubowska said. “They want an unlimited margin of discretion to subject our faces, our bodies and our communities to these dystopian uses of technology, despite a complete lack of democratic mandate.”
And Anna Bacciarelli, Associate Tech Director at Human Rights Watch, branded facial recognition surveillance a “huge risk to human rights everywhere.”
“There is consensus among human rights experts around the globe that the only solution is to urgently ban facial recognition surveillance: it’s imperative that governments and companies act on this to safeguard human rights now and in the future,” Bacciarelli said.
Big Brother Watch is leading the UK campaign to stop live facial recognition surveillance.
Statement in Full: Stop Facial Recognition Surveillance Now
We have a range of concerns about facial recognition surveillance, ranging from serious concerns about its incompatibility with human rights, to the potential for discriminatory impact, the lack of safeguards, the lack of an evidence base, an unproven case of necessity or proportionality, the lack of a sufficient legal basis, the lack of legislative oversight, and the lack of a democratic mandate.
All of these views lead us to the same conclusion: 180 experts call on police, other state authorities and private companies to immediately stop using facial recognition for the surveillance of publicly-accessible spaces and for the surveillance of people in migration or asylum contexts.
The signatories to this call are civil society organisations and individual experts (including researchers, academics and advisors in technology, privacy, data protection and human rights, lawyers and other professionals). This letter remains open for additional individual and organisation signatures. To add your signature, go to: https://tinyurl.com/stop-facial-rec-signature