
Law-Makers Must Wake Up to the Threat of Artificial Intelligence

The European Union’s Artificial Intelligence Act will allow the creeping increased use of AI by law enforcement agencies to continue, reports Catherine Connolly




The European Union’s proposed Artificial Intelligence Act promises to be the world’s first legal framework designed to regulate the use of artificial intelligence. However, it falls short in several respects.

Firstly, it does not apply to military uses of AI. And while it does classify AI systems in a range of other areas as ‘high risk’, including certain biometric identification systems and law enforcement systems, it contains a number of exemptions that will still allow such systems to be used for those purposes.

In fact, even some of the practices it deems ‘unacceptable’ will be allowed in specified circumstances, such as for ‘the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack’.

As a result, deliberate loopholes will exist for law enforcement to use ‘high risk’ and even ‘unacceptable’ systems.

Alarmingly, as both European Digital Rights and the European Center for Not-for-Profit Law have pointed out, there also appears to be a trend towards creating further exceptions for the use of such AI systems for ‘national security’ and ‘public security’ purposes, with both France and Slovakia recently proposing amendments that would create exemptions in these areas.

Similarly, Article 22 of the EU’s General Data Protection Regulation (GDPR) affords individuals ‘the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’. Yet exceptions also exist here for reasons of national security, defence, and public security.

This is all the more worrying when considering that AI systems developed or used exclusively for military purposes are already exempt from the AI Act, and that, if the recent French and Slovakian amendments are accepted, the AI Act would not apply to such systems when used for national or public security purposes.

This means that the AI systems used in autonomous weapon systems (also known as killer robots: weapons that select and engage targets on the basis of data collected by sensors) would not fall under the purview of the AI Act. This is despite the fact that such weapon systems are ‘high-risk’ under any common understanding of the term.

In the context of armed conflict, the International Committee of the Red Cross has called for states to adopt new rules on autonomous weapons, including banning such systems where they would target people.

While international human rights law applies to all uses of force by law enforcement and other state security forces, the fact that the AI systems used in autonomous weapons could go unregulated if deployed in such circumstances is not only irresponsible but dangerous.

It is naïve to think that weapon systems with autonomous functions won’t eventually be considered for use by domestic police forces under the ‘public security’ exemption, or used in the context of migration and border control under a ‘national security’ exemption.


For example, as a recent report from Statewatch reveals, since 2007 the EU has spent over €341 million in public funding on new technologies for immigration and border control that involve artificial intelligence, including autonomous border control robots.

The majority of that funding has gone to private companies, including at least one company that is actively developing increasingly autonomous weapon systems. At the same time, various EU bodies and agencies, including the European Defence Agency, are funding research projects focused on military uses of AI. 

Though autonomous weapon systems aren’t addressed in the Artificial Intelligence Act, they have been addressed elsewhere by the EU. In the recent final report from the European Parliament’s Special Committee on Artificial Intelligence in a Digital Age, the Committee called on the European Council ‘to adopt a joint position on autonomous weapons systems that ensures meaningful human control over their critical functions’.

The report also stated that ‘humans should be kept in control of the decision to deploy and use weapons and remain accountable for the use of lethal force and for decisions over life and death’, and said that ‘machines cannot make human-like decisions involving the legal principles of distinction, proportionality and precaution’.

In resolutions passed in 2021 and in 2018, the European Parliament insisted on the need for a ban on killer robots. These interventions, while welcome, put autonomous weapon systems firmly in an armed conflict framework, and do not fully consider the possibility of their use for law enforcement or public security.


Given the EU’s focus on the military development of autonomous weapon systems, it would be easy to assume that states are well on their way to agreeing on new international rules for their development, deployment, and use in war. Unfortunately, this is not the case. International humanitarian law (IHL) is the main body of law that regulates the conduct of armed conflict.

Today, new IHL rules on the means and methods of warfare are primarily made by states acting either within the UN treaty-making system – such as through the UN Convention on Certain Conventional Weapons – or through external treaty-making processes, such as the Convention on Cluster Munitions and the Anti-Personnel Mine Ban Convention.

For nine years, states have been meeting to discuss autonomous weapon systems within the framework of the Convention on Certain Conventional Weapons at the UN in Geneva, but progress has been continuously blocked by a small number of highly militarised states. And while numerous EU member states participate actively in these UN discussions, many of these countries have still not committed to supporting the negotiation of new legally binding international rules.

The recent deployment of weapon systems with autonomous functions and target recognition capabilities (Russia’s use of the KUB-BLA drone in Ukraine, for example) and uses of biometric identification systems (such as Ukraine’s use of facial recognition technology supplied by Clearview AI) highlight not only the growing prevalence of these technologies but also the urgency of effectively addressing their development and use.

The boundaries between technologies of war and technologies of state and police power have always been porous, and the AI Act fails to recognise this. European law-makers need to take note.

Dr Catherine Connolly is the automated decision research manager for the Stop Killer Robots campaign

