Voters in Germany were exposed to a significant amount of far-right narratives online during the federal election, driven by AI-generated content and Russian disinformation campaigns, research has shown.
The findings come amid calls for the German Government to acknowledge the impact this content had on delivering a record result for the far right. Alternative for Germany (AfD) — which opposes sending weapons to Ukraine and has called for an end to sanctions on Moscow — secured a historic second-place finish on Sunday with almost 21%.
Konstantin von Notz, a Green member of parliament who chairs the committee that oversees the German intelligence services, told the Financial Times that it was impossible to say exactly how many votes were swayed by the content.
However, while not calling for the result to be overturned, von Notz told the publication that “what you can say for sure is that there was relevant, illegitimate influence on decision-making processes”. He added that “we simply have to recognise that our elections are already manipulated — and successfully manipulated”.
The MP expressed further alarm that the AfD, together with the far-left Die Linke, would be able to form a “blocking minority” in the next parliament.
Experts monitoring social media during the German elections identified the involvement of Russian-based groups such as “Doppelganger” and “Storm-1516,” which US officials had found to be active in America’s previous election.
These campaigns utilised artificial intelligence to spread their messaging ahead of the vote, which ultimately saw Germany elect a new Bundestag.
Methods employed in these disinformation campaigns included creating fake TV news stories and deep-fake videos featuring fabricated accounts from “witnesses” or “whistleblowers” about prominent politicians.
In November 2024, shortly before the snap election was called, a video surfaced claiming that a parliamentary member who was an outspoken supporter of Ukraine was a Russian spy. The video used AI to suggest that a former adviser was making the accusation.

In another incident, a video featured an 18-year-old woman falsely accusing a German minister of child abuse. This video, also created using AI, was part of a broader disinformation campaign that plagued the election period.
Beyond the extremes of Russian-led disinformation campaigns, far-right groups within Germany also ramped up their online presence.
Larissa Wagner, an AI-generated social media influencer, became a notable figure in this regard. On 22 September 2024, the day of the Brandenburg state election, Larissa posted a video to her X account saying, “Hey guys, I’m just on my way to the polling station. I’m daring this time. I’m voting for AfD.”
Larissa’s accounts on X and Instagram were both created within the past year, and her regular videos espoused far-right narratives, such as telling Syrian immigrants to “pack your bags and go back home.” She even claimed to have interned with the right-wing magazine Compact, which the German Government banned last year.

When Sky News messaged ‘her’ on Instagram to inquire about her creators, Larissa responded, “I think it’s completely irrelevant who controls me. Influencers like me are the future… Like anyone else, I want to share my perspective on things. Every influencer does that. But because I’m young, attractive, and right-wing, it’s framed as ‘influencing the political discourse’.”
While Larissa had only 680 followers on Instagram, she had 5,400 on X along with a paid-for blue tick, which boosts the reach of her posts to a far bigger audience. Her most recent posts have racked up thousands of views.
Her final post on X before the election contained AI-generated clips interspersed with news footage. The content made both sides of the AfD’s vision appear startlingly real: the idyllic, nostalgia-driven future it promised, and the dystopian one it warned of if others won the election.
The hard right’s use of generative AI on social media extends beyond characters like Larissa. During the German election period, AI-generated content played a significant role in bolstering Alice Weidel’s anti-migration, populist Alternative for Germany party (AfD).
Top politicians in AfD posted and shared a slew of AI-generated images and videos, contributing to an entire online ecosystem of AI-generated content supporting the party. Norbert Kleinwächter, a parliamentarian for the AfD, was among the party’s most active posters of AI-generated content.
The national party’s website featured a helmet-clad coal miner, a carefree woman dancing, and a smiling family, all AI-generated and accompanied by simple slogans such as “Now I get my freedom back” and “Now is the time to return”. These stock-style images depicted the kind of “regular” Germans who supported the AfD and were typical of much of the AI-generated content the party used.
As well as using the content to portray their supporters, the party also used it to visualise their promise to return the country to an idealised, better time and to prevent what they saw as a dark future brought about by increased immigration.
The party is open about using AI for its Instagram account. A video from the regional-level AfD in the eastern German state of Brandenburg, published ahead of the state elections last autumn, illustrated this well by harkening back to a supposed golden age.
The voiceover in the video said, “Your ancestors had a homeland in Brandenburg—this is where your grandpa took your grandma dancing, this is where your mother went to kindergarten, this is where your grandpa built his house. This is where your parents brought you into the world and made you the person you are today.”
In these posts, women walked down the street in burqas, and groups of men with dark skin leered directly at the viewer, whereas elderly Germans gathered plastic bottles to make ends meet or looked despairingly at empty wallets. All AI-generated.
A report from the Institute for Strategic Dialogue identified 883 posts since April 2023 that included images, memes, and music videos made using generative AI. The posts originated from far-right supporters and the AfD, with party accounts publishing more than 50 generative AI content posts in October 2024 alone.
AI content is meant to be identified as such. Instagram, for example, should flag when something is made with generative AI. This doesn’t always happen; in practice, a label often appears only if enough users report the post as ‘made with AI’.
Far-right groups exploited the weak enforcement of the EU’s new Digital Services Act and the limitations in the AI Act to disseminate AI-generated content. Barely any of the posts have been taken down.
The AfD is not alone in using AI. Republicans in the US used it to depict what the country would look like under a second Biden presidency.
The AfD chose instead to show how happy Germans would be under its leadership and how disastrous the CDU would be for the country. The difference lies in how boldly the AfD has deployed the technology.
There have been AI clips on Instagram, AI posters, and even a now-disbanded AfD youth group that used generative AI to make songs and music videos about party policies. It is by far the most extensive use of AI compared with mainstream political parties across Europe.
I wonder whether, now that this threshold has been crossed, other parties will begin using AI more openly. Politicians, campaigns and voters are using AI to express ideas they could not otherwise articulate, or to visualise hypotheticals. It is very easy to slip into negative campaigning, and who does this use of generative AI harm most? The voters.