5 Shocking Cases of AI-Generated Deepfakes Interfering in Global Politics
Deepfake technology is all fun and games until it falls into the hands of a bad actor. Thanks to the ongoing evolution of generative AI services, deepfakes are becoming more accessible and more sophisticated every day. As technology continues to evolve, it’s becoming harder and harder for human reviewers and professional fact-checkers to tell the difference between what’s real and what’s fake.
As a result, cases of deepfake cybercrime are becoming more common, with deepfake technology being exploited to improve the success rates of identity and financial fraud, spread misinformation, blackmail individuals or businesses, manipulate public opinion, impersonate brands, create explicit content of unconsenting individuals, carry out corporate espionage, or interfere in politics.
Over the past few years, the world has witnessed real-life cases of how the growing sophistication and accessibility of AI can erode trust in democratic processes, with deepfakes being used to influence public opinion during elections and sway voters.
A 2024 study conducted for The Guardian found that only one in four registered US voters have strong confidence in their ability to tell the difference between real and AI-generated visual content, posing significant risks to information integrity in political contexts.
We expose 5 of the most shocking cases of deepfake technology interfering in global politics.
1. Deepfake Zelenskyy announces Ukraine’s surrender
In March 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy told Ukrainian soldiers to lay down their arms and surrender the fight against Russia.
While the video circulated on social media, hackers even sent the fake Zelenskyy message across live television. It was also published on a Ukrainian news website before being debunked and removed.
While officials at Facebook, YouTube and Twitter said the video was removed from their platforms for violating policies, the video was allegedly boosted on Russian social media.
“This is the first one we’ve seen that really got some legs, but I suspect it’s the tip of the iceberg,” Hany Farid, a professor at the University of California, Berkeley and an expert in digital media forensics, told NPR when the incident occurred in 2022.
In a video posted to his Telegram channel, Zelenskyy responded to the fake video by saying: “We are defending our land, our children, our families. So we don’t plan to lay down any arms. Until our victory.”
While the quality of the 2022 deepfake wasn’t very sophisticated, deepfake technology has come a long way since, and deepfakes continue to become more and more convincing every day.
Researchers commented that the circulation of such videos could lead to people questioning the veracity of real videos of President Zelenskyy in the future.
2. Deepfake video accuses Minnesota governor and former vice presidential candidate Tim Walz of sexual assault
In October 2024, a Russian-aligned propaganda network known as Storm-1516 reportedly orchestrated a disinformation campaign that fabricated and disseminated false allegations accusing Tim Walz of sexually assaulting a former student during his tenure as a high school teacher.
Storm-1516 is notorious for creating deepfake whistleblower videos and allegedly pushed at least 50 false narratives in the lead-up to the November 5, 2024 US elections. The same group was also linked to false claims that former vice president Kamala Harris perpetrated a hit-and-run that left a woman paralyzed in San Francisco in 2011.
A video claiming to show a former student of Walz describing abuse by the former football coach spread widely on X after being shared by a prominent anonymous QAnon-promoting account.
The video was later found to have been created using AI, yet that didn’t stop it from garnering over 4.3 million views before it was deleted. While the campaign to attack Walz predates the publication of the deepfake video, the video caused the story to go viral.
3. Germany’s anti-immigration and populist Alternative for Germany party uses AI-imagery to win votes
Across Europe, far-right parties and activists have increasingly utilized AI-generated content to advance anti-immigrant and xenophobic agendas.
In Germany, members of the Alternative for Germany (AfD) party posted AI-generated content online to push their anti-immigration stance and influence voters ahead of the country’s election on February 23rd, 2025.
Far-right former chancellor candidate Alice Weidel shared an AI-generated video that presented an “idyllic” future without immigration, showing a Germany filled with white, blonde, and blue-eyed people, versus a “dystopian” future ruled by mass immigration, in which Angela Merkel takes a selfie with a person with darker hair and a darker skin tone.
There have even been reports of AI-generated songs spreading anti-immigration rhetoric with lyrics such as, “now it’s time to go, we’ll deport you all”, which party members have come under fire for singing.
Do you remember how beautiful Germany once was? And do you still believe that the CDU, of all parties, will solve all the problems it caused itself? That’s why, on February 23rd: #AfD! pic.twitter.com/gE4n16OvVH
— Alice Weidel (@Alice_Weidel) January 5, 2025
In the February 2025 election, the AfD obtained 20.8% of the vote, coming second after the CDU/CSU. The far-right party doubled its share and achieved its best-ever result in nationwide German elections.
4. Deepfake voice recording reveals Slovakia election rigging
Just 48 hours before polls opened for Slovakia’s September 30th, 2023 election, a deepfake audio recording of candidate Michal Šimečka, who leads the liberal Progressive Slovakia party, and Monika Tódová, from the daily newspaper Denník N, discussing how to rig the election was posted to Facebook. In the recording, the two voices talk about buying votes from the country’s marginalized Roma minority.
Although Šimečka and Denník N quickly dismissed the audio as fake, the clip was shared during Slovakia’s 48-hour pre-election moratorium, a period when media and political figures are required to remain silent.
As a result, election laws made it difficult to publicly debunk the post. Additionally, because it was an audio recording rather than a video, it slipped through a loophole in Meta’s manipulated media policy, which only prohibited manipulated videos in which someone appears to say something they never actually said.
Šimečka’s liberal Progressive Slovakia party lost the election to the left-wing populist and nationalist party SMER.
5. Deepfake UK Prime Minister Rishi Sunak
In January 2024, The Guardian reported that more than 100 deepfake video advertisements impersonating then UK prime minister Rishi Sunak had been paid for and promoted on Facebook over the previous month, ahead of the July 2024 General Election. One of the videos, which claimed Sunak would require 18-year-olds to be sent to active war zones in Gaza and Ukraine as part of their national service, garnered more than 400,000 views.
More than £12,929 (more than $17,000) was spent on 143 adverts, originating from 23 countries including the US, Turkey, Malaysia and the Philippines.
The research was carried out by Fenimore Harper, a communications company set up by Marcus Beard. Beard previously worked for the UK government to counter conspiracy theories during the Covid crisis.
“With the advent of cheap, easy-to-use voice and face cloning, it takes very little knowledge and expertise to use a person’s likeness for malicious purposes.” – Marcus Beard, The Guardian
How Deepfakes Threaten Identity Verification Processes
Beyond spreading misinformation, fraudsters are increasingly using deepfakes to impersonate real people and manipulate biometric identity verification systems, often to gain unauthorized access to services or data.
This can be done via a presentation attack, in which a fraudster physically presents a fake biometric to the camera, such as a deepfake video played on a screen, or via an injection attack, in which the attacker bypasses the camera entirely by injecting a fake video or image file directly into the verification software using an emulator or virtual camera.
With open-source tools and generative AI, high-quality deepfakes can be created cheaply and easily, reducing the cost and effort needed to commit identity fraud. This has led to an increase in deepfake activity.
In our 2025 customer survey, we found that 96.4% of fintech professionals consider deepfake and synthetic identity fraud to be a top-of-mind concern. Almost 30% reported that they had either occasionally or frequently encountered incidents of deepfake or synthetic identity fraud in the past year.
Biometric Identity Verification Built to Detect Deepfakes
Powered by advanced AI, Incode’s identity verification solutions evolve in real time to stay ahead of emerging threats, including the rapid rise of deepfake-driven fraud.
By analyzing texture inconsistencies, light reflections, motion patterns, and iris structure, Incode delivers advanced protection against deepfakes, verifying if the person in front of the camera is real and live in just 40 milliseconds.
Incode has trained over 35 proprietary ML models on a massive dataset of over 5.9 million annotated fraud samples. This enables Incode to proactively adapt to and outpace evolving fraud techniques, including those driven by generative AI.
Our solutions verify if the person in front of the camera is real and live:
- Over 99.6% accuracy in NIST’s 1:N Face Recognition Technology Evaluation (FRTE), ranked #1 among full-solution IDV providers
- Certified iBeta Level 2 for spoof detection
Incode stops over 99.9% of spoof attacks across a wide range of vectors, including both injection and presentation attacks.
Looking to protect your business against hyperrealistic deepfakes?
Schedule a demo now to experience how Incode protects against multi-angle fraud using multi-layered protection.