What are the consequences of handing over life and death decisions to a machine in combat, even with tokenistic human oversight?
The combat vehicle’s intelligent robotic machine gun autonomously jolts into action, pirouettes through a mechanical ballet twist and aims its barrel at the desperately scared humans hunkering behind a collapsed wall in a dusty, ruined town. The machine ‘thinks’ for a few seconds, then decides to open fire. It computes, but does not care about, the screaming that ensues. The days of clumsy Dalek nightmares and Japan’s Gakutensoku are over.
The above is fiction, but it is coming sooner than you might imagine to a war zone, civil disturbance or protest near you. Robotics is changing, rapidly, but important recent developments rarely make mainstream news.
The ANYmal robot resembles a mean metal dog pumped up on steroids, legs ready to spring and cyclops arm itching to play. ANYmal can run fast and recover from falling over, yet its real breakthrough is bridging the simulation–reality gap, whereby well-designed models and simulations often fail to translate into graceful, responsive movement in dynamic natural environments.
ANYmal crosses this bridge using artificial intelligence (AI): a series of neural networks generates and processes data from which the robot can respond and learn autonomously, without human instruction.
In January 2019, researchers explained how this is achieved by merging predictive mathematical models with machine learning. First, a classical mathematical model is built; it is then fed real-world data, processed through a series of neural networks operating at hundreds of thousands of steps per second, essentially creating a robotic learning process. This hybrid approach is called “end-to-end training”.
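As a loose illustration of the idea (not the ANYmal team’s actual code, and with all names invented for the sketch), a hybrid set-up might pair a hand-written dynamics model with a small neural network that learns the model’s errors from logged real-world data:

```python
# Illustrative sketch only: a hybrid "classical model + learned correction".
# ResidualNet, analytic_step and hybrid_step are hypothetical names.
import torch
import torch.nn as nn

class ResidualNet(nn.Module):
    """Small network that learns what the hand-written physics model gets wrong."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64),
            nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def analytic_step(state, action):
    # Placeholder for a classical dynamics model (e.g. rigid-body equations).
    # Toy linear update; assumes state and action have the same dimension.
    return state + 0.01 * action

def hybrid_step(state, action, residual_net):
    # Prediction = classical model + learned correction from real-world data.
    return analytic_step(state, action) + residual_net(state, action)

def train(residual_net, data_loader, epochs=10):
    # Fit the residual so hybrid predictions match logged real-world
    # transitions of the form (state, action, next_state).
    opt = torch.optim.Adam(residual_net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for state, action, next_state in data_loader:
            pred = hybrid_step(state, action, residual_net)
            loss = loss_fn(pred, next_state)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The point of the sketch is the division of labour: the classical model supplies structure, while the network absorbs the messy real-world behaviour the model cannot capture.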
The difference between such hybrid methods and classical robotics is loosely analogous to the difference between programming a toddler with instructions on how to walk (classical) and letting the toddler learn organically through its own lived experience (hybrid). Similar work is occurring in a robot called SpotMini by Boston Dynamics. Such robots have numerous applications, from hazardous industrial inspections to search and rescue.
But robots learning for themselves in this organic way can have darker and deadlier applications.
In the US, the intent is to apply such technology to unmanned autonomous target acquisition, specifically in a project called Atlas (Advanced Targeting and Lethality Automated System). Atlas will have vision across numerous wavebands and will autonomously label and identify objects (including humans) in images of combat scenes, a process termed “ground truthing”. Once labelling is complete, algorithms will infer a series of “facts” and logical consequences in a process called semantic reasoning. The robotic weapons system will then decide whether to open fire and kill… or not. This is the cutting-edge frontier of combat decision-making technology, yet it barely makes the news.
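To make the shape of such a pipeline concrete, here is a purely hypothetical sketch of a label–reason–decide chain with a human-approval gate. None of these functions, thresholds or rules come from the real Atlas programme; they exist only to show where the decision logic, and the human, would sit:

```python
# Purely illustrative sketch of a detect -> label -> reason -> decide pipeline.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", "vehicle" - output of the labelling stage
    confidence: float   # classifier confidence, 0.0 to 1.0
    armed: bool         # inferred attribute used by the reasoning stage

def semantic_reasoning(det: Detection) -> bool:
    """Infer a 'fact' such as 'this is a lawful target' from labelled attributes.
    The rule below is invented for illustration; writing such rules is the hard part."""
    return det.label == "person" and det.armed and det.confidence > 0.95

def engagement_decision(det: Detection, human_approval: bool) -> str:
    # The degree of human oversight is exactly the point of contention:
    # a tokenistic check is one line of code away from no check at all.
    if semantic_reasoning(det) and human_approval:
        return "ENGAGE"
    return "HOLD"

# Example: a 94%-confident detection is held; lower the threshold and it is not.
print(engagement_decision(Detection("person", 0.94, armed=True), human_approval=True))
```

Everything that matters, from the confidence threshold to what counts as “armed”, is a choice someone writes into the system.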
My intent here is not to polarise the debate or demonise the respective players in this arena.
The question is: are we ready to, and should we ever, hand over those decisions of life and death to a machine, even with tokenistic human oversight?
The US Department of Defense publishes guidelines designed to minimise the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended killings. The developers of Atlas state they will operate within these guidelines, although many of them appear to allow a measure of flexibility and vagueness in precisely how they apply on the ground.
Atlas is not the first. In 2007, an immature variant of this technology, called Swords (Special Weapons Observation Remote Reconnaissance Direct Action System), was deployed in Iraq – machine gun-armed mini tank-like systems. The project was terminated after the units moved in erratic, unanticipated ways, with obviously dangerous consequences in a firefight. Although it is stated that the units never fired live rounds, there has been considerable misinterpretation, confusion and misinformation in the press, and official silence, on the series of events and the project developments that followed. Transparency and military research rarely make comfortable bedfellows.
Last April, the United Nations held its first meeting on Lethal Autonomous Weapons Systems (LAWS), at which an international ban on their use was proposed. The motion was defeated due to opposition from Israel, the US and Russia, all of whom have invested deeply in AI weapons systems.
What is sorely needed is greater public awareness of and education about these recent robotic developments, combined with frank public discussion of their design, purpose, need, cost, implementation and potential consequences. These topics should be trending in mainstream news, not only in niche publications, in order to create the public and political pressure required for transparency around the use of military robotics and AI in war zones.
The consequences of robots self-learning and making independent life and death decisions in combat are profound, and worthy of intense scrutiny and debate. Machine error resulting in unintended loss of life is effectively manslaughter, but who gets charged? The operator? The government? The designer?
We know from recent controversial failures of facial recognition that current artificial intelligence systems are far from infallible at identifying people accurately, even in non-lethal scenarios. On the other hand, removing human bias and error may save lives in stressful combat zones: robotic systems may distinguish combatants from non-combatants faster and more accurately than a human operator ever could.
Some people suggest that if ethics were encoded into the programs, the weapon systems could not fire on innocent targets even if commanded to do so by a human. But who writes and encodes the ethics? And whose ethics?
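A hypothetical sketch shows how blunt such an “encoded ethics” layer would be in practice; the rule set below is invented, and that is precisely the problem: someone has to choose it and write it down.

```python
# Hypothetical 'encoded ethics' veto layer; the labels and rule are invented.
PROTECTED_LABELS = {"child", "medic", "surrendering", "non-combatant"}

def ethics_veto(target_label: str, commanded_by_human: bool) -> bool:
    """Return True if the system must refuse to fire, regardless of command."""
    # commanded_by_human is deliberately ignored: the claim being illustrated
    # is that the veto overrides even a direct human order. Whoever defines
    # PROTECTED_LABELS, and the classifier that assigns labels, writes the ethics.
    return target_label in PROTECTED_LABELS

# Even a direct human command is blocked if the (possibly wrong) label is protected.
assert ethics_veto("medic", commanded_by_human=True) is True
```

And the veto is only as good as the label: a misclassified medic, or a misclassified combatant, gets whatever the code says they get.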
I do not wish to delve into futuristic ‘Terminator’-like scenarios, but such questions have immediate implications for academic research, military meetings and international guidelines, all happening in the present and shaping our future.
Public education, discussion and consultation need to play a greater part in this process.