FUTURE-PROOFING ELECTIONS AGAINST DEEPFAKE DISINFORMATION

EXECUTIVE SUMMARY

This paper examines how Generative AI (GenAI) is reshaping election-related disinformation and assesses how civil society can future-proof democratic processes against the next wave of manipulative technologies. While deepfakes have dominated global headlines as an existential threat to democracy, the evidence from the 2024–2025 “super election cycle,” during which nearly half of the world’s population voted, reveals a more complex reality. The much-feared “deepfake election” did not materialise; however, the convergence of cheapfakes, synthetic media, and algorithmic amplification continues to erode public trust in elections, journalism, and institutions.

Deepfakes – AI-generated video, audio, or images that mimic real people with a high degree of believability – represent a distinct medium of deception within the broader information disorder ecosystem. They differ from “cheapfakes”, which rely on low-tech manipulation such as splicing or mislabelling. Deepfakes exploit the realism heuristic: the human tendency to place greater trust in visual than in text-based information. Studies show that they reinforce pre-existing cognitive biases, increase uncertainty, and trigger lasting memory distortions, even after being debunked. Across the Global South, the most common threats remain cheapfakes, narrative manipulation, and coordinated inauthentic behaviour, which exploit low digital literacy, linguistic inequities in content moderation, and polarised information ecosystems.

This report, produced by the Digital Democracy Initiative at CIVICUS, introduces a Deepfake Risk Matrix – a conceptual framework for assessing national vulnerabilities across five domains: political-institutional, social, economic, digital ecosystem and actor behaviour.
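To make the framework concrete, the matrix can be pictured as a simple scoring structure. The sketch below is purely illustrative: the five domain names come from the report, but the 0–5 scale, the equal weighting, and the example country and scores are assumptions introduced here, not part of the matrix itself.

```python
from dataclasses import dataclass

@dataclass
class DeepfakeRiskProfile:
    """Illustrative sketch of a country profile under the Deepfake Risk Matrix.

    Domain names follow the report's five domains; the 0-5 scale and
    equal weighting are assumptions for demonstration only.
    """
    country: str
    political_institutional: int  # e.g. trust in electoral institutions
    social: int                   # e.g. polarisation, digital literacy
    economic: int                 # e.g. disinformation-for-hire markets
    digital_ecosystem: int        # e.g. platform reach, moderation gaps
    actor_behaviour: int          # e.g. coordinated inauthentic behaviour

    def overall_risk(self) -> float:
        # Unweighted mean across the five domains (an assumed aggregation rule).
        scores = [
            self.political_institutional,
            self.social,
            self.economic,
            self.digital_ecosystem,
            self.actor_behaviour,
        ]
        return sum(scores) / len(scores)

# Hypothetical example: a country scoring moderately across all domains.
profile = DeepfakeRiskProfile("Exampleland", 3, 4, 2, 4, 3)
print(profile.overall_risk())  # mean of the five domain scores
```

In practice an assessment like this would weight domains differently by context – the case studies below show, for instance, that institutional trust and media freedom can matter more than technical exposure – but a flat structure is enough to convey the comparative intent of the matrix.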

Using case studies from Namibia, Ecuador, Singapore and Germany, we find that the impact of deepfakes is less determined by their technical sophistication than by contextual factors such as media freedom, trust in electoral institutions, and the strength of civil society, as illustrated by the following points:

  • In Namibia, cheapfakes targeting President Netumbo Nandi-Ndaitwah employed gendered disinformation to question her fitness for office, showing how manipulated media intersects with entrenched social biases.

  • In Ecuador, AI-generated content formed part of a broader hybrid information war, amplified by violence, corruption, and bot-driven polarisation.

  • In Singapore, deepfake regulation is among the world’s most advanced, yet a high degree of state control limits open debate and can blur the line between regulating disinformation and censorship.

  • In Germany, a robust regulatory and fact-checking ecosystem under the EU Digital Services Act and AI Act mitigated synthetic media risks, demonstrating the value of pre-emptive legal and civic action.

Across these cases, civil society organisations (CSOs) emerged as a crucial line of defence. In 2024 and 2025, CSOs globally expanded their monitoring, partnering with fact-checkers and launching deepfake literacy campaigns. Examples include BOOM Live’s Deepfake Tracker in India, Mafindo’s synthetic media monitoring in Indonesia, and Witness’s pre-bunking initiatives in the United States. These initiatives collectively strengthened public resilience and pressured platforms to enhance transparency. However, disparities remain, especially among CSOs in the Global South, which face limited resources, restricted data access, and political threats.

The report argues that regulatory readiness is uneven across the globe. Existing frameworks focus on reactive takedowns or content labelling rather than structural transparency or accountability. Many countries in the Global South lack specific legislation on synthetic media, while those that do often risk criminalising freedom of expression. A human-rights-based approach to deepfake governance must, therefore, mitigate potential harm, while at the same time protecting free expression, creativity, and civic participation.

A key insight is that the deepfake problem is not technological in nature but systemic. It reflects long-standing weaknesses in media ecosystems, the commodification of public attention, and the political economy of content-generating platforms. Mitigating harm requires embedding human rights and accountability principles at every stage of the technology lifecycle, from data collection and model training to deployment and platform moderation.

The advocacy strategy and recommendations (Part 5 of this report) call for a coordinated global response centred on:

  • Mandatory algorithmic and data transparency from tech platforms.

  • Human impact assessments and independent ethics boards for AI developers.

  • Balanced, rights-based regulation of synthetic media that avoids overreach.

  • Investment in digital literacy and prebunking campaigns, especially for women, youth, and marginalised communities who are often targeted with disinformation.

  • Stronger South–South cooperation and funding for CSO-led monitoring and response networks.

Ultimately, the report concludes that deepfake disinformation should be viewed as a symptom of deeper structural inequalities in the global information space. Technological defences alone will not suffice. The durability of democracy will depend on investing in informed, resilient, and empowered audiences and media consumers. Preparation – rooted in evidence, ethics, and equity – remains a powerful defence of democratic principles and human rights.