Britain’s next elections face a major threat of being swayed by convincing AI-generated ‘deepfake’ audio and video of political figures, artificial intelligence experts have told Byline Times.
The past couple of months have already seen at least two convincing AI-generated examples of politicians saying things they never said – both targeted at the Labour Party.
In early October, a Twitter/X account posted what appeared to be audio of Labour leader Sir Keir Starmer abusing his party staffers.
However, before it was confirmed as fake, it was shared as if it were real by at least one national journalist, at the Express newspaper.
Two Labour-facing blogs also shared it, suggesting it could be real. Socialist site Skwawkbox wrote a piece stating: “The Labour party has failed to deny the authenticity of an alleged recording of Keir Starmer in a foul-mouthed tirade against a hapless 16-year-old intern” because the press office had not responded to its inquiry after three hours.
Another left-wing site, VoxPolitical, wrote: “An audio file has been released, purportedly of Keir Starmer bullying Labour Party staffers. A check on whether the file was computer-generated gives a more-than-90-per-cent probability that the voice on the recording is human, although we cannot be entirely sure that it is Starmer’s.”
Even after the audio was debunked by AI experts and fact-checkers Full Fact, some continued to sow doubt. A piece in DorsetEye stated: “It appears from now on that anything the establishment does not like, they will blame on AI. In some cases, they may have a point, but in others, they will be making it up.”
Then, just last week, clips circulated on social media of London mayor Sadiq Khan appearing to play down the importance of Remembrance commemorations, ahead of Saturday’s Palestine protest. A voice resembling Mr Khan’s could be heard saying: “I don’t give a flying s*** about the Remembrance weekend.”
City Hall was livid as the narrative that Sadiq Khan opposed Remembrance gained wider traction. The deepfake videos were shared extensively in far-right networks before being debunked.
The Met Police investigated complaints – and then dropped the case. In a statement, the force said: “Specialist officers have reviewed this video and assessed that it does not constitute a criminal offence.”
That, perhaps, is the most worrying part of the saga.
Unprotected
The cost of entry for creating and disseminating false video and audio of politicians to stir division, anger and confusion is now next to nothing. And there is no punishment for doing so.
At this point, it becomes an extremely attractive proposition for hostile states to interfere in our elections. We do not know the sources of the Starmer and Khan deepfakes this month, but hostile state-linked actors haven’t been ruled out. Equally, they could be from basement-dwelling trolls with too much time on their hands. Whatever the case, sanctions appear non-existent.
Many more of these will be created – likely with such frequency that fact-checkers will not be able to check them all.
Andy McDonald is a fellow at the London College of Political Technology, at Newspeak House, focusing on artificial intelligence. He tells Byline Times that from a regulation perspective, it’s “nigh on impossible” to stop people creating political deepfakes.
AI-created audio may be the hardest to spot. “It’s easy to mask AI glitches on audio because if your video model isn’t that good, you can see the breakup in some of the words, and that the voice manipulation just isn’t working.
“But if you’ve got background chatter, or music in the background, you can make it seem like it’s been recorded in a hall or in a studio. It’s a lot easier to cover up.”
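McDonald’s point can be made concrete with a toy calculation. The sketch below is illustrative only – the ‘speech’, ‘glitch’ and ‘chatter’ are made-up signals, not real deepfake audio – but it shows how a bed of background noise shrinks a synthesis artefact’s share of the total energy, which is roughly why noisy ‘hall recordings’ are harder to flag.

```python
import numpy as np

# Illustrative only: a pure tone stands in for speech, a high-frequency
# tone for a synthesis glitch, and white noise for background chatter.
rng = np.random.default_rng(0)
sr = 16_000                                   # sample rate (Hz)
t = np.arange(sr) / sr                        # one second of 'audio'

speech = 0.5 * np.sin(2 * np.pi * 220 * t)    # stand-in for a voice
glitch = 0.05 * np.sin(2 * np.pi * 6000 * t)  # stand-in for an artefact
chatter = 0.3 * rng.standard_normal(sr)       # background noise bed

def artefact_prominence_db(mix: np.ndarray, artefact: np.ndarray) -> float:
    """Energy of the artefact relative to everything else, in dB."""
    rest = mix - artefact
    return 10 * np.log10(np.mean(artefact**2) / np.mean(rest**2))

clean = speech + glitch
masked = speech + glitch + chatter

print(f"glitch prominence, clean:  {artefact_prominence_db(clean, glitch):.1f} dB")
print(f"glitch prominence, masked: {artefact_prominence_db(masked, glitch):.1f} dB")
# Adding chatter drops the glitch's relative energy, so the same flaw
# is harder to hear - and harder for a detector to isolate.
```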
As far as he’s concerned, the law is non-existent on political deepfakes. “Regulations would have to be on broadcasters or news providers to double down on ensuring verification and double sourcing.” Asked if hostile states will become more involved in creating AI-generated fakes, he is clear: “Look how the Russians circulated misinformation in the 2016 US presidential election. It will be insane given the tools they have now.”
AI expert Professor Shweta Singh at Warwick Business School says there’s ambiguity about whether political deepfakes are illegal. While sharing intimate or sexual deepfakes is a specific offence, there is no equivalent law against deepfakes aimed at spreading misinformation – an omission that extends to the recently passed Online Safety Act.
“The harm caused by pornographic images is quite clear and direct. But political harm is harder to define. We have to protect democracy – but how do we know which party is harming/benefiting? That’s hard to define in law.”
Voluntary Commitments
Social media giants say they are acting on the issue. Facebook and Instagram – owned by tech monolith Meta – are now requiring advertisers to make clear when deepfakes have been used. Google has implemented similar transparency rules.
Seven major tech companies – including OpenAI, Amazon and Google – have also committed to incorporating “watermarking” tools into their AI content, so that it is possible to detect what has been AI-generated. But it is only a voluntary commitment, and for now foolproof watermarking tech simply does not exist. “It’s not [a] foolproof commitment in any way,” Prof Singh tells me.
“Even if they fulfil their commitment, there is no standard for watermarking. Each company can use their own method. [AI image generator] DALL-E uses a visible watermark – but you can do a quick search to figure out how to easily remove it.”
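The weakness Prof Singh describes is easy to demonstrate. The toy example below is hypothetical and does not reflect any vendor’s actual scheme: it stamps a visible badge onto an image array, then ‘removes’ it with a simple crop – no specialist tooling required.

```python
import numpy as np

# Hypothetical scheme, not any real product's: the 'watermark' is just
# a block of bright pixels in the bottom-right corner of the image.
image = np.full((256, 256, 3), 128, dtype=np.uint8)   # stand-in AI image

def stamp_badge(img: np.ndarray) -> np.ndarray:
    """Overlay a visible yellow badge in the bottom-right corner."""
    marked = img.copy()
    marked[-16:, -64:] = (255, 255, 0)
    return marked

marked = stamp_badge(image)

# 'Removal' needs no special tooling at all: cropping the strip off
# leaves an image with no trace of the badge.
laundered = marked[:-16, :, :]

print("badge present before crop:", bool((marked[-16:, -64:] == (255, 255, 0)).all()))
print("image shape after crop:   ", laundered.shape)
```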
And there are many more routes for deepfakes to spread than Facebook and Google ads. “You can email people deepfakes. They could be shared in private chats. There are lots of other platforms than the seven signed up for watermarking. I don’t think any of these commitments will work the way they should.”
“We need responsible AI solutions that go beyond just watermarking. We have to come up with clever, quick solutions…The law is not a quick fix,” the AI expert says.
Prof Singh is calling for new mechanisms to be rolled out to identify deepfake video and audio to catch them at their inception – and prevent their spread. It could work in a similar way to email spam filters: content could be flagged as being ‘likely AI-generated’.
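As a rough sketch of that spam-filter analogy – everything here is invented, and detector_score stands in for whatever deepfake classifier a platform might deploy – the flagging step could look something like this:

```python
from dataclasses import dataclass

AI_LIKELY_THRESHOLD = 0.9   # assumed operating point, tuned per platform

@dataclass
class Post:
    post_id: str
    media_url: str

def detector_score(media_url: str) -> float:
    """Placeholder for a real audio/video deepfake classifier."""
    return 0.97   # pretend the model is confident the clip is synthetic

def label_post(post: Post) -> str:
    """Flag, rather than block, suspect media - like a spam folder."""
    score = detector_score(post.media_url)
    if score >= AI_LIKELY_THRESHOLD:
        return f"{post.post_id}: flagged 'likely AI-generated' ({score:.0%})"
    return f"{post.post_id}: no label"

print(label_post(Post("clip-001", "https://example.org/suspect-clip.mp4")))
```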
And a big gap for UK politics remains: X/Twitter – run by libertarian billionaire Elon Musk – has yet to introduce a policy requiring users to disclose AI-generated content. “If a tweet becomes viral, that can have more impact than sharing an email, or a paid ad. A tweet might swing me more than a Google ad or email,” Prof Singh says.
“It goes without saying that hostile states will use these tools…Bad actors can use AI to meddle in elections and nudge voters.” Even if content is eventually detected as being AI-generated, often the damage will already have been done. Distrust and uncertainty have been sown.
“What exactly is an ad? It’s to put something into the back of your mind. Eventually it moves to the foreground and changes your judgements and decisions,” says Singh. In sum, flagging something as AI-generated does not mean it will no longer have any effect.
Interference is Inevitable
Rishi Sunak has trumpeted his “Bletchley Declaration” – an agreement signed by 28 countries at his AI Safety Summit at the start of November. But there is no mention of the threat of AI-generated misinformation in the accord, and it was barely a footnote in the gathering itself. Instead, Sunak seemed more keen to cosy up to Elon Musk – a CEO who has appeared to let misinformation run rife on X since his takeover last year.
“If we can’t find a solution, it will be impossible to not have our elections meddled by AI and deepfakes,” Singh says. Despite all this, she is confident that a range of tools can and will be rolled out to limit the risk.
Her message to news outlets – and voters too – is for them to be much more careful than they currently are. “Think before you share anything. First make sure something is true. We will still see stuff that will sway us – because what we see is what we believe. But now what we see is not necessarily going to be true. We have to reprogram our own brains.”
A Government spokesperson admitted to Byline Times that deepfakes peddling misinformation “pose a severe threat to our democracy and freedoms.” They said the Government is “committed to firmly responding to any threats to the UK’s democracy.”
“Under the Online Safety Act, the largest social media platforms must remove illegal content when they become aware of it, including the unlawful use of deepfakes or manipulated media.”
However, we do not yet have confirmation that political deepfakes are actually illegal.
The Government spokesperson claimed that an updated “false communications” offence included in the Online Safety Act will also “capture any manipulated media where the sender of such content is aware it is untrue and intends to cause harm to the recipient.”
The Met Police’s judgement appears to be that the bar for illegality is very high indeed.
We are in uncharted waters. From now on, we will all have to get used to the fact that seeing is not believing anymore.
Do you have a story that needs highlighting? Get in touch by emailing josiah@bylinetimes.com