
‘A Voluntary Tech Agreement Will Do Little to Defend Electoral Integrity Against the Worst Uses of AI’

The Tech Accord to Combat Deceptive Use of AI in 2024 Elections is well-meaning but lacks detail and urgency, argues Emma DeSouza

Mark Zuckerberg, founder of Meta, at a Senate Judiciary Committee hearing at the US Capitol, on 31 January 2024. Photo: Sopa Images/Alamy


Across the globe, an estimated four billion voters will be heading to the polls this year – more than in any other year in human history. All the while, rapid advancements in artificial intelligence continue to draw us deeper into an era of near-indiscernible deepfakes.

As the quality, output and volume of AI-generated language, image, video, and audio advances at pace, major players in machine learning and social media have publicly committed themselves to jointly combatting the deceptive use of this rapidly advancing field of technology.

But can the threat AI poses to democracy be stymied in time?

AI images are being generated at an approximate rate of 34 million per day – almost 400 images a second. In the span of 18 months, 15 billion AI-generated images were created – a volume that took photographers 149 years to achieve.

The scale and quality of AI images is not going to roll backwards. Machine learning is becoming increasingly sophisticated, with AI images permeating virtually every area of life – from social media to marketing and news.

AI technology can capture any person’s likeness and create an entirely fabricated action for the subject to perform in either an image or a video. The technology can replicate any person’s speech and cadence and can read a script of the user’s design in their voice.

In the context of elections, fake media can fuel disinformation and misinformation about political leaders, which could influence how people vote. In the UK and Ireland, we have already seen AI-generated videos circulating of Prime Minister Rishi Sunak, Labour Leader Keir Starmer, and Ireland’s Taoiseach Leo Varadkar. 

Beyond explicitly visual elements, instances of insidious AI-generated websites proliferating fake news have skyrocketed, with an increase of more than 1,300% since last May, ballooning from 49 to 713 sites in less than a year, according to disinformation tracker Newsguard.


Advances in AI-generated audio have resulted in fake calls intended to interfere in elections. In January, voters in New Hampshire received calls from an AI-bot imitating US President Joe Biden’s voice.

When it comes to video content, many of us will have seen the largely benign 2021 Tom Cruise deepfake but, in the years since, this technology has become increasingly photorealistic – and weaponised.

AI technology has the potential to significantly disrupt democratic processes and artificially sway electoral outcomes to a degree never before seen. Beyond proliferating deceptive imagery, video and audio content, the technology is capable of eroding public trust in political institutions and manipulating political priorities.

Politicians, in an election year at least, are responsive to the priority issues that matter to constituents – but what if a large-language model were deployed to write tens of thousands of fake constituent letters on an issue that was not a priority for voters in that area?

Tech giants, including the CEOs and creators of some of the best-known AI service providers, have repeatedly expressed concern that AI could pose a significant risk to humanity. This risk is not of the ‘Skynet’ variety, but rather the more subtle societal impact of AI technology once weaponised to alter the landscape of the society we live in – whether through electoral interference, mass disruption to the labour market, or large-scale discrimination and racial profiling.

These risks are very real, and effective safeguards are not in place to prevent interference in the democratic outcomes of elections occurring across 2024.

Recognising the looming threat it poses to democracy, 20 tech firms – including Google, Microsoft, Meta, and OpenAI – have committed to work collectively in good faith to tackle deceptive AI in elections.

The Tech Accord to Combat Deceptive Use of AI in 2024 Elections was announced at the Munich Security Conference last week and includes several voluntary commitments and guiding principles. 

The Accord recognises its own limitations and clearly tries to strike a balance between free speech and deliberately deceptive AI, stating that companies will deploy “reasonable precautions to limit risks of deliberately Deceptive AI Election Content being generated” and that this won’t prevent AI content relating to elections. Because deliberate intent forms part of the review process, content could easily slip through.

Detection, education, and provenance are core planks of the agreement, with plans to deploy the same machine-learning technology to identify and detect AI-generated election content that may be deceptive or harmful – for instance by “developing classifiers or robust provenance methods like watermarking or signed metadata”.

Meta has previously announced that it will be watermarking AI-generated content on its platforms ahead of the elections. It remains to be seen how effective watermarking will be, but it is a welcome step in helping the public discern between real and machine-created content.

The intent behind the voluntary agreement is well-meaning and suggests an appetite for collaboration among key players. “We have a responsibility to help ensure these tools don’t become weaponised in elections,” said Brad Smith, Microsoft president.

However, many of the commitments remain vague and toothless.

In reality, this voluntary agreement is ultimately one more pact to heap atop the dusty pile of other voluntary agreements, between countries, tech firms, and key players – including the 20-page non-binding agreement between 18 countries including the UK and the US announced last November, itself atop an earlier US-UK pact announced last May.


There is no shortage of voluntary codes, flowery agreements, or handshakes – but they do little to provide tangible guardrails.

The Munich Accord identifies education as one of its key pillars, reflecting the importance it places on citizens building the skills necessary to better protect themselves from manipulation. Yet it lacks the finer detail as to how exactly tech companies will achieve that goal, stating only that the signatories will “support educational campaigns” – exactly how robust this support would be is left open to interpretation.

Digital literacy skills are some of the most important tools for personal protection against misinformation and disinformation in the rapidly evolving digital landscape. The challenge is that these skills are not compulsory in education systems and the vast majority of potential voters who stand to benefit from proper training have already left the education system without having been adequately prepared to detect the key markers of AI-generated content. 

While the technology is becoming increasingly sophisticated, it is still possible to recognise markers. For example, the responses of large language models such as OpenAI’s ChatGPT have some unambiguous ‘tells’ if you know what to look for. Most of us, however, have never been explicitly shown what form they take.

AI technology is ripe for misuse. It is the most effective and accessible tool for nefarious actors to stoke division, polarisation, and extremism, requiring minimal effort and technical skill to harness. While the Munich Accord aspires to counter disinformation, the policies outlined will do little to defend electoral integrity.

Machine-generated content is hidden in plain sight, with most of us engaging with some form of AI-generated content on a daily basis. Democracy has been under threat for decades, from autocrats to populists. But AI is the Goliath.

