The launch of widely accessible, user-friendly Generative AI (GenAI) platforms has made deepfake production easier, faster, cheaper and more accessible than ever before. This has ushered in an era of disinformation so believable that it could further undermine the foundations of civic life. Nowhere is this threat more acute than in the democratic arena, where deepfakes pose a direct challenge to elections, public trust, and the legitimacy of democratic institutions.
Platforms such as ChatGPT, Grok, Synthesia, Vall-E and thousands of other applications have significantly lowered the technical barriers that once restricted deepfake creation to those with advanced skills and resources. As the capabilities of GenAI continue to advance, the line between fact and fiction – already blurred in an era of contested truth – will become more obscured.
Indeed, the introduction of OpenAI’s Sora in October 2025 illustrates how AI-generated content is becoming increasingly difficult to detect as fake. Earlier GenAI applications, though hyper-realistic, often contained tell-tale signs that betrayed their inauthentic nature. Sora’s deepfakes, in contrast, are “extremely real,” as noted by The New York Times (2025).
What this level of believability of deepfakes could mean for elections slated in 2026 and beyond remains to be seen.
To better understand this new reality of GenAI and its impact on elections, we examine the lead-up to the 2024 “super election year” – the biggest election year in human history, when half of the world’s population, 3.7 billion people across 72 countries, went to the polls.
The global electoral cycle in 2024 was described by Aspen Digital as “the first AI election” (Schiller & Harbath, 2025) due to its taking place in tandem with what was predicted to be a proliferation of “supercharged” AI-generated content (Freedom House, 2023).
Governments, policymakers, academics, and experts nearly unanimously agreed that a massive wave of false information was approaching (Alanazi et al., 2025). The World Economic Forum declared disinformation and misinformation the primary immediate global risk for 2024, based on interviews with nearly 1,500 global leaders from academia, business, government, the international community and civil society.
Anticipating a surge in synthetic content, major platforms introduced policies, detection systems, and labelling initiatives to demonstrate readiness.
As the 2024 election year drew to a close, it was clear that the anticipated surge of deepfake disinformation had not materialised. Evidence indicated that GenAI was predominantly used for entertainment, satire, and efficiency improvements rather than as a primary tool for widespread voter manipulation. Traditional channels of disinformation, including societal elites and the mainstream media, had a greater impact (Simon & Altay, 2025). This was certainly the case in some countries in the Global South, where cheapfakes, not deepfakes, still dominated the information landscape.
This more limited impact, however, does not imply that the threat no longer exists. Rather, it emphasises the need to increase efforts to proactively prepare for and meet the challenges posed by emerging technological threats to democracy. It also underscores the need to continue addressing the root causes of information disorders, instead of shifting the focus to new mediums.
To that end, this paper pursues three aims. First, it traces the evolution of deepfake technology and its role in reshaping the global disinformation landscape. Second, it explores existing empirical research on the effects of political deepfake disinformation to build a solid evidentiary foundation for civil society to assess the real-world impacts of deepfakes. The goal is to develop a knowledge and evidence base that captures how deepfakes are influencing elections worldwide, with a particular focus on the Global South.
Third, it analyses the evidence-based scale of deepfake electoral disinformation between 2024 and 2025, up to the writing of this report. Four country case studies – Namibia, Ecuador, Singapore and Germany – are examined as each represents a distinct social, political, and economic digital ecosystem with unique challenges from which we can extract lessons that can strengthen future democratic resilience.
In addition, we introduce a comparative matrix framework for analysing deepfake disinformation. The matrix organises the case studies around key contextual and impact variables, enabling systematic comparison across digital ecosystems and socio-cultural and political contexts, particularly in the Global South. The framework enables a country-specific analysis within the matrix, allowing researchers to pinpoint issues that require targeted intervention. Its purpose is proactive: to anticipate and address emerging risks. Used reactively, it can also identify precisely where and how problems manifested.
Lastly, we present an advocacy strategy with a set of recommendations for civil society partners. To avoid the self-defeating narrative that exaggerates AI’s threat to elections, we also highlight AI’s potential to strengthen civil society and civic space worldwide with a list of use cases.
NOTE: For shorthand, “disinformation” is used in this paper to refer to both disinformation and its byproduct: misinformation.