The Government risks leaving UK elections wide open to artificial intelligence-generated misinformation as it focuses on longer-term concerns over robots wiping out humanity, a new report has suggested.
Following the PM’s gathering of AI and tech bosses at Bletchley Park in November – which led to a voluntary agreement on sharing AI models with regulators – the report argues: “There is a risk that these actions focus so much on the potential existential risks of AI that they miss the more immediate risks that are coming down the track on election integrity in the 2024 bumper year of elections.”
This year sees pivotal elections in the UK, India, the European Union, the US and more – and experts fear that hostile actors will seek to sow mistrust, apathy and division through the abuse of new generative AI tools and the creation of embarrassing ‘deepfakes’ of politicians to swing elections.
During the last Labour Party conference, deepfake audio clips of Labour leader Keir Starmer were circulated on social media which purported to show him verbally abusing party staffers and criticising the city of Liverpool where the conference was being held.
Around the same time, elections in Slovakia were rocked when a fake audio recording was created of Michal Simecka, the leader of the Progressive Slovakia Party, discussing how to “rig” the general election. It was credited with contributing to his party’s surprise loss just days later.
Argentina’s November 2023 election was dubbed by some the “first AI election” in which both candidates used AI extensively to generate images of their opponent, the think tank Demos notes.
The warning comes in a paper for the think tank, Generating Democracy: AI and the Coming Revolution in Political Communications, by Alison Dawson and James Ball.
The report calls for UK parties to jointly commit not to produce or share misleading AI-generated material – such as faked video, faked audio or doctored images – in this crucial election year.
It’s also likely that AI tools will be used for “micro-targeting” certain groups of voters in the upcoming election – where personal data is analysed to “identify and tailor messages towards the interests of a particular audience”. Again, this could be done at unprecedented speed and scale using AI tools.
“If you had a psychological profile of each voter to target them individually, you can expedite this with AI,” Dr Keegan McBride, Departmental Research Lecturer in AI, Government, and Policy at the Oxford Internet Institute told the report’s authors.
Another public relations expert told the authors the use of AI in the communications industry is “about to explode”.
Polling conducted by YouGov on behalf of Cavendish for the study found that the overwhelming majority (80%) of MPs in the UK say they have never used AI in their work.
A tiny proportion – 3% – said that they had used it for social media posts, while a further 3% said they had used it for campaign materials such as leaflets. Demographic surveys suggest, however, that their younger staffers are likely using tools like ChatGPT in their work.
During a recent House of Commons debate on AI, MP Matt Warman claimed that other MPs have “confessed” to him that they have used AI to write their speeches.
However, the report argues that many content producers are operating in grey areas where there are no set rules but norms are being established.
And where organisations have attempted to be transparent in their use of AI tools, it has sometimes cost them. Human rights group Amnesty International used AI-generated photos depicting protests in Colombia.
“They said this was to protect protesters from retribution and included text saying the images were AI-generated. [But] they faced backlash for the use of these images and removed them from social media, suggesting that even when organisations use such images with transparency and good intentions they can still face criticism,” the authors write.
The report suggests that, without regulation or a firm agreement among political communicators, the reputational cost of revealing that content was produced by artificial intelligence might be too high.
And while most mainstream AI tools such as ChatGPT and Midjourney have some safeguards protecting against real public figures’ images or videos being significantly altered to create fake footage or stills of them in compromising situations, there are many unregulated tools online.
In a 2023 article for the tech magazine Wired, Professor Kate Starbird wrote that communicators – including those in politics – can use generative AI to “write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.” This may be an understatement given the November launch of new “Turbo” GPT models.
The Demos report notes that generative AI could be used to create fake content “quickly and at scale”.
There is almost no legal regulation of generative AI, as the major planks of tech legislation were drafted in the 2000s, and the Government’s flagship Online Safety Act has yet to come fully into effect. The Act will eventually force tech firms to enforce their own safety standards, but it is not clear how this applies to online tools that lack safeguards altogether.
Josiah Mortimer also writes the On the Ground column, exclusive to the print edition of Byline Times.