
Sunak Focuses on Tiny Chance of ‘Extinction by Robots’ While Elections Remain Wide Open to AI Deepfakes and Disinformation — Report

While tech bosses and the PM concentrate on what could happen decades from now, artificial intelligence is already shaping our politics.

Prime Minister Rishi Sunak and Elon Musk, CEO of Tesla, SpaceX and X.Com, in conversation at the conclusion of the second day of the AI Safety Summit on the safe use of artificial intelligence last year. Photo: PA Images/Alamy


The Government risks leaving UK elections wide open to artificial intelligence-generated misinformation as it focuses on longer-term concerns over robots wiping out humanity, a new report has suggested. 

Following the PM’s gathering of AI and tech bosses at Bletchley Park in November – which led to a voluntary agreement on sharing AI models with regulators – the report argues: “There is a risk that these actions focus so much on the potential existential risks of AI that they miss the more immediate risks that are coming down the track on election integrity in the 2024 bumper year of elections.”

This year sees pivotal elections in the UK, India, the European Union, the US and more – and experts fear that hostile actors will seek to sow mistrust, apathy and division through the abuse of new generative AI tools and the creation of embarrassing ‘deepfakes’ of politicians to swing elections. 

During the last Labour Party conference, deepfake audio clips of Labour leader Keir Starmer were circulated on social media which purported to show him verbally abusing party staffers and criticising the city of Liverpool where the conference was being held.


Around the same time, elections in Slovakia were rocked when a fake audio recording was created of Michal Simecka, the leader of the Progressive Slovakia party, discussing how to “rig” the general election. It was credited with contributing to his surprise loss just days later. 

Argentina’s November 2023 election was dubbed by some the “first AI election” in which both candidates used AI extensively to generate images of their opponent, the think tank Demos notes. 

The warning comes in a paper for the policy group, Generating Democracy: AI and the Coming Revolution in Political Communications, by Alison Dawson and James Ball.

The report calls for UK parties to jointly commit not to produce or share AI-generated material they believe is misleading – such as faked video or audio, or doctored images – in this crucial election year.

It’s also likely that AI tools will be used for “micro-targeting” certain groups of voters in the upcoming election – where personal data is analysed to “identify and tailor messages towards the interests of a particular audience”. Again, this could be done at unprecedented speed and scale using AI tools. 


“If you had a psychological profile of each voter to target them individually, you can expedite this with AI,” Dr Keegan McBride, Departmental Research Lecturer in AI, Government, and Policy at the Oxford Internet Institute told the report’s authors. 

Another public relations expert told the authors the use of AI in the communications industry is “about to explode”. 

Polling conducted by YouGov on behalf of Cavendish for the study found that the overwhelming majority (80%) of MPs in the UK say they have never used AI in their work.

A tiny proportion – 3% – said they had used it for social media posts, while a further 3% said they had used it for campaign materials such as leaflets. Demographic surveys suggest, however, that their younger staffers are likely using tools such as ChatGPT in their work. 

During a recent House of Commons debate on AI, MP Matt Warman claimed that other MPs have “confessed” to him that they have used AI to write their speeches. 

However, the report argues that many content producers are operating in grey areas where there are no set rules but norms are being established. 

And where organisations have attempted to be transparent in their use of AI tools, it has sometimes cost them. The human rights group Amnesty International used AI-generated photos depicting protests in Colombia.

“They said this was to protect protesters from retribution and included text saying the images were AI-generated. [But] they faced backlash for the use of these images and removed them from social media, suggesting that even when organisations use such images with transparency and good intentions they can still face criticism,” the authors write. 


The report suggests that, without regulation or a firm agreement from political communicators, the reputational cost of revealing that content was produced by artificial intelligence might be too high. 

And while most mainstream AI tools, such as ChatGPT and Midjourney, have some safeguards against real public figures’ images or videos being significantly altered to create fake footage or stills of them in compromising situations, many unregulated tools exist online. 

In a 2023 article for the tech magazine Wired, Professor Kate Starbird wrote that communicators – including those in politics – can use generative AI to “write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.” This may be an understatement given the November launch of new “Turbo” GPT models. 

The Demos report notes that generative AI could be used to create fake content “quickly and at scale”. 

There is almost no legal regulation of generative AI: the major planks of tech legislation were drafted in the 2000s, and the Government’s flagship Online Safety Act has yet to come fully into effect. The Act will eventually force tech firms to enforce their own safety standards, but it is not clear how this will apply to online tools that lack safeguards altogether. 


Josiah Mortimer also writes the On the Ground column, exclusive to the print edition of Byline Times.
Do you have a story that needs highlighting? Get in touch by emailing josiah@bylinetimes.com


