The UK’s election watchdog has warned that it has no powers to tackle ‘deepfake’ content, leaving Britain open to AI-generated fake videos of politicians warping the debate and outcome of the country’s next election.
It comes after Byline Times reported that Britain’s next elections face a major threat of being swayed by convincing AI-generated ‘deepfake’ audio and video of political figures, according to artificial intelligence experts who spoke to this newspaper.
The past couple of months have already seen at least two convincing AI-generated examples of politicians saying things they never said – both targeting the Labour Party. In early October, a Twitter/X account posted what appeared to be audio of Labour leader Sir Keir Starmer verbally abusing his party staffers. It was followed shortly afterwards by deepfake audio of Labour Mayor of London Sadiq Khan appearing to disrespect Remembrance commemorations.
However, the law is unclear as to whether deepfake videos and audio of political figures are illegal. They may fall under the new malicious communications measures in the Online Safety Act, but the Met Police quickly dropped an investigation into the Sadiq Khan deepfake, saying it did not constitute a criminal offence.
Experts worry that AI-generated fake videos of politicians could go viral before the next election, with the cost of producing convincing fake footage of public figures saying or doing scandalous things now close to zero. Many in the field believe hostile state actors will inevitably use the emerging technology to try to sway UK elections and sow disruption.
Since the beginning of November 2023, campaigners have been required to include an “imprint” stating who published certain political campaign materials online. But they are not required to disclose whether content is AI-generated, and the Electoral Commission cannot sanction or take down misinformation. Ofcom has more powers in this area, but it too suffers from the law being unclear on AI-generated political misinformation and deepfakes.
A spokesperson for the Electoral Commission told Byline Times: “We don’t have a remit on deepfakes or the content of campaign material, as we’re responsible for regulating party and campaigner finance as well as compliance with the digital imprint requirement.”
The elections watchdog said that it is working with partners and other regulators to “better understand both the opportunities and the challenges that this technology can pose to elections.”
They noted that campaigners are free to use deepfake content to sway elections, adding: “We support voters to think critically about information they see online and check anything they think might be disinformation, offering information on our website about how to identify fake news, and which regulators you can take concerns to.”
The Commission is now calling on the UK Government to consider strengthening the powers of UK regulators, including the Commission itself, “so they are equipped to deal with future challenges.” It wants powers to obtain information from social media and technology companies, as well as online payment providers, most likely with a view to establishing who is funding political misinformation.
Tech Firms Race to Catch Up with Threat
Byline Times contacted several tech giants to ask what policies they had in place to protect against harmful political deepfakes.
A TikTok spokesperson told this outlet that it prohibits all political advertising. For ‘organic’ (non-ad) content, synthetic or manipulated media that shows realistic scenes must be clearly disclosed with a sticker or caption such as “fake” or “altered”, the social media firm said. The spokesperson added: “We do not allow synthetic media of public figures if the content is used for endorsements or violates any other policy.
“We have strict policies on misinformation, which is defined as content that is inaccurate or false. We will remove misinformation that causes significant harm to individuals, our community, or the larger public regardless of intent.
“Significant harm” includes “undermining of public trust in civic institutions and processes such as governments, elections, and scientific bodies,” the TikTok spokesperson said.
Google/Alphabet told Byline Times that much of its research is currently focused on tackling misinformation. The firm says it is experimenting with integrating “watermarking” (embedding information that makes clear content is AI-generated) into some of its AI content generation tools.
“As we continue to roll out generative AI features, we’ll ensure that every one of our AI-generated images has a markup in the original file so that it’s possible to determine whether the image is AI generated,” a spokesperson said.
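For illustration only: a “markup in the original file” of this kind generally means provenance information embedded in the image file itself, alongside the pixels. The minimal Python sketch below shows the general mechanism using the Pillow library; the key names are invented for this example and are not Google’s actual scheme.

```python
# Illustrative sketch only: embed a provenance note in a PNG's metadata.
# The key names below are hypothetical, not Google's actual markup scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256))  # stand-in for an AI-generated image

metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # hypothetical key
metadata.add_text("generator", "example-model-v1")  # hypothetical key
image.save("generated.png", pnginfo=metadata)

# Anyone inspecting the file can read the markup back out:
reopened = Image.open("generated.png")
print(reopened.text)  # {'ai_generated': 'true', 'generator': 'example-model-v1'}
```

Metadata of this kind is easy to strip from a file, which is why it is usually paired with watermarking that lives in the pixels themselves.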
In August, Google announced SynthID, a tool that allows users to embed an “imperceptible” digital watermark into AI-generated images and identify whether Imagen, one of its text-to-image models, was used.
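Google has not published SynthID’s internals, but the toy sketch below illustrates the general idea of a watermark hidden invisibly in pixel values: here, storing bits in the least significant bit of each pixel’s red channel. This is purely an illustration of the concept; real systems such as SynthID use far more robust techniques that survive cropping, compression, and editing.

```python
# Toy illustration of an imperceptible watermark (least-significant-bit
# embedding). This is NOT SynthID's method; it only shows the basic idea
# of hiding a recoverable signal in pixel values the eye cannot see.
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one watermark bit in the red channel of each leading pixel."""
    out = pixels.copy()
    red = out[..., 0].flatten()                       # explicit copy of channel
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits
    out[..., 0] = red.reshape(out.shape[:2])          # write channel back
    return out

def extract(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read the first n hidden bits back out."""
    return pixels[..., 0].flatten()[:n] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, 128, dtype=np.uint8)             # 128-bit watermark

marked = embed(image, mark)
assert np.array_equal(extract(marked, mark.size), mark)              # recoverable
assert np.max(np.abs(marked.astype(int) - image.astype(int))) <= 1   # invisible
```

A detector that knows the embedding scheme can then flag an image as AI-generated even after any file metadata has been removed.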
The tech giant says it has trained a tool that can detect synthetic speech with “nearly 99% accuracy.”
Google says that in mid-November this year, it is updating its Political Content policy to require that all verified election advertisers in regions where verification is required (including the UK) must “prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events.” The disclosure must be “clear and conspicuous”, applying to image, video and audio ads.
A new “About this image” tool in Google Image search also lets people check when an image was first published online and see the webpages where it previously appeared, to counter old photos being passed off as new.
‘Technically manipulated content’ on Google-owned YouTube is officially banned where it “misleads viewers and may pose a serious risk of egregious harm.” Google says it removes such content.
Google says YouTube will require creators to “disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.”
The risk is that AI-generated video realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do – particularly sensitive in the days before an election.
Meta/Facebook did not respond to requests for comment. Twitter/X’s press office simply replies with an automated response.
Do you have a story that needs highlighting? Get in touch by emailing josiah@bylinetimes.com