FUTURE-PROOFING ELECTIONS AGAINST DEEPFAKE DISINFORMATION

PART 2: LITERATURE REVIEW: EMPIRICAL EVIDENCE ON THE IMPACT OF DEEPFAKES

Much of the popular understanding of the risks of deepfake political disinformation and its impact on electoral integrity rests on speculative, alarmist perspectives. This stems from the inherent flaws of speculation as a thought process and from broad generalisations that assume uniform technology acceptance, use, and effects across the globe.

Fortunately, the last three years have seen a surge in empirical assessments of the risks posed by deepfakes. Researchers and academics have begun deploying experimental designs to measure how deepfakes influence voter perceptions, trust in democratic processes, and the overall information ecosystem. This includes assessing how cognitive, emotional, and behavioural processes influence the believability and spread of deepfakes.

The need to carefully examine the psychosocial dynamics behind the spread of political disinformation, across all its delivery methods, cannot be overstated. Because deepfakes are a relatively new and potentially potent method of content delivery, analysis of their impact must be grounded in empirical research to support evidence-based mitigation and advocacy strategies.

As researchers at Purdue University note, much of the existing academic literature on deepfakes has focused on their perceived credibility, examining whether viewers find them convincing, whether they alter beliefs or attitudes, and whether they can be detected in idealised laboratory settings using researcher-generated content. However, with the growing integration of GenAI into politics, there is an urgent need to expand inquiry into the real-world impact of political deepfakes (Walker, Schiff, & Schiff, 2024).

As a global civil society alliance, CIVICUS is leveraging the strength of its diverse membership to help civil society organisations respond to emerging digital challenges. Through this paper, we contribute to the growing body of empirical research on the impact of deepfakes on electoral integrity.

The insights offered here are intended to ground this empirical analysis, enabling civil society partners to test theoretical assumptions against real-world impact rather than speculative, fear-based narratives. By combining an overview of academic research with case studies from four countries and other examples, this paper aims to catalyse a research agenda that strengthens advocacy efforts, culminating in the recommendations outlined in the accompanying Advocacy Brief.

Before turning to the literature review, we clarify some conceptual issues.

Not all deepfakes are intended to harm or are harmful. Deepfakes can be, and have been, disseminated as legitimate political and social commentary in the form of memes, satire, and parody. Indeed, according to researchers, much of the deepfake usage observed during the 2024 electoral cycle served purposes of satire, education, or political commentary (Nathan & Sanders, 2024).

Disinformation, however, is distinguished by its core feature, namely the intent to deceive. Therefore, a deepfake ceases to be simple commentary or satire if it is intended to deceive.

There has also been a tendency to describe the now broad access to GenAI platforms as the “democratisation” of AI. We caution against the use of this phrase. Access to technological tools alone does not make technology democratic. As highlighted by researchers at the Centre for the Governance of AI and Harvard University’s Berkman Klein Centre, the concept of “democratisation” masks deeper questions of power: primarily, who owns the infrastructure, who sets the rules, and who benefits? In all instances, the response to these three questions of power is the Global North (Seger et al., 2023), to quote:

AI democratisation is a multifarious and sometimes conflicting concept that should not be conflated with improving AI accessibility. If we want to move beyond ambiguous commitments to “democratising AI” to productive discussions of concrete policies and trade-offs, then we need to recognise the principal role of the democratisation of AI governance in navigating tradeoffs and risks across decisions around use, development, and profits.

CIVICUS continues to address the Global South–North digital divide as a key challenge affecting civic space and digital democracy. Our focus remains on strengthening digital resilience and access for civil society in the Global South. We advocate for genuine digital inclusion, which means ensuring access to technology, training, and tools that reduce inequality and empower participation. We also champion the use of digital technologies to expand democratic engagement and global solidarity, while protecting vulnerable communities from online repression.

2.1 What are the Psychological Effects of Deepfake Consumption?

Understanding the impact of disinformation requires understanding how the human brain processes information and forms beliefs. Cognitive biases have long been established as central to the spread and believability of disinformation, shaping how individuals interpret and accept information. And the interaction between deepfakes and cognitive biases is an area of emerging study.

HM Murtuza and MD Oliullah (2025) explored research on the impact of political deepfakes on cognitive processing. While noting a Western-dominated approach in many studies, they found evidence that political deepfakes can trigger a wide range of psychological effects, including deception, uncertainty, loss of trust, and shifting attitudes, thus reinforcing existing biases. They further include the vital proviso that belief formation depends heavily not only on the content itself but also on individual traits, political leanings, and the wider social and political environment. Deepfakes do not affect everyone in the same way. They interact with existing beliefs and predispositions, producing complex and varied outcomes.

In their study, Weikmann, Greber, and Nikolaou (2024) concluded that exposure to deepfakes has far-reaching consequences beyond simple deception, asserting that “people [could start to] no longer believe in what they see.” Deepfakes affect both the perceived credibility of information and an individual’s confidence in identifying falsehoods. In high-choice media environments, this uncertainty can be especially damaging, as it could amplify scepticism towards journalism and politics.

Effects on memory have been equally substantiated. A Massachusetts Institute of Technology study confirmed that AI-generated content can distort memory and perception (Pataranutaporn et al., 2024). Participants exposed to AI-altered images were significantly more likely to report false memories, and this effect intensified with AI-generated videos, where confidence in these fabricated recollections was even stronger. The content not only misinforms in the present but also reshapes how the past is remembered.

Therefore, there appears to be scholarly agreement that deepfakes have deceptive power and can have the following consequences:

  • Trigger a wide range of psychological effects, including deception, uncertainty, loss of trust and shifting attitudes, thus reinforcing existing cognitive biases.

  • Affect both the perceived credibility of information and an individual’s confidence in identifying falsehoods.

  • Distort memory and the perception of events, not only in the present, but also in the past. This effect is more pronounced with deepfake videos.

  • Their impact, however, is heavily dependent not only on the content itself but also on individual traits, political leanings, and the wider social and political environment.

2.2 Are Deepfakes more Persuasive than other Types of Disinformation?

Opinions vary over how persuasive deepfakes are compared to other types of disinformation.

Ching et al. (2024) conducted a scoping review of existing empirical studies that have investigated the effects of viewing deepfakes on people’s beliefs, memories, and behaviours. They found evidence suggesting that exposure to deepfakes can influence opinions about public figures, increase the believability of misinformation, and create false memories. However, it remained unclear whether deepfakes are more manipulative than other forms of misinformation.

HM Murtuza and MD Oliullah (2025) noted that while political deepfakes can lower trust in online news and increase distrust of the government, research offers mixed results on whether deepfakes are more credible or persuasive than other forms of misinformation. Sharing deepfakes depends on cognitive ability, confirmation bias, and social dynamics. In other words, deepfake content is not inherently more credible or persuasive than other forms of misinformation.

By contrast, Sundar, Molina, and Cho (2021) theorised that video can make disinformation seem more credible than audio or text, increasing the likelihood that the content will be shared. The effect is stronger among users with less interest in or knowledge of the topic, who are more likely to believe fake stories presented in video format. Perceived realism increases both credibility and the intention to share the content on platforms such as WhatsApp.

These differing opinions highlight the need for more extensive and continuous research to fully understand the impact of deepfakes on information integrity and how they are reshaping media consumption habits.

One well-studied concept in the research does command broader agreement: the liar’s dividend of deepfakes.

The phrase “liar’s dividend” was coined by two legal scholars, Danielle Citron and Robert Chesney (2019), to describe a new social construct: how disinformation serves as a convenient and powerful rhetorical device for malicious actors, in this case, politicians. It allows them to plausibly dismiss real evidence as “misinformation” or “fake news.” This is seen with deepfakes in information ecosystems, where the concept of truth is contested and it is hard to distinguish fact from fiction. Chesney and Citron explain:

Ironically, liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes. Imagine a situation in which an accusation is supported by genuine video or audio evidence. As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deep fakes. Put simply: a sceptical public will be primed to doubt the authenticity of real audio and video evidence. This scepticism can be invoked just as well against authentic as against adulterated content. Hence, what we call the liar’s dividend: this dividend flows, perversely, in proportion to success in educating the public about the dangers of deep fakes. The liar’s dividend would run with the grain of larger trends involving truth scepticism.

2.3 Are Deepfakes more Convincing than Cheapfakes?

Hameleers (2024) conducted two experiments to compare the effects of deepfakes and cheapfakes on the perceived credibility of political disinformation. Using a between-subjects design, Dutch participants were shown manipulated videos of a conservative Dutch politician, falsely portrayed as delivering a radical, anti-immigration speech. The fabricated speech included statements such as “immigrants are responsible for most of our country’s problems” and that people from “backward and retarded societies commit violent crimes.” These claims were false and inconsistent with both the politician’s real views and empirical crime data. The researchers deliberately constructed the video as disinformation.

In this particular experiment, deepfakes were not rated as more believable or credible than cheapfakes, at least on the specific topic of immigration and within Dutch society: “on average, deepfakes are rated as less credible and believable than cheapfakes.”

One could be tempted to generalise these findings and declare that cheapfakes are more potent than deepfakes. That is not so. The greater point in Hameleers’s study is that less sophisticated modes of deception, such as cheapfakes, can be as credible as more sophisticated deepfakes. The power of both forms depends heavily on context.

This, however, does not discount the possibility that, as deepfakes become more realistic and lose the tell-tale signs of falsity, they will become more believable, even to disinformation-literate audiences.

Does this make deepfakes more powerful? Yes and no. It depends on the topic, the context, and the audience. Disinformation must always be analysed within context. We explore the role of context in answering the next question.

2.4 How are Societal Norms and Cultural Values Influencing the Spread of Deepfakes?

To answer this question, researchers consulted 14 accomplished experts, each selected for their rich background and expertise across various disciplines (Alanazi et al., 2025). The experts highlighted the need to consider how factors such as region, personality, and social media use may affect attitudes towards technology and authenticity, leading to varied perceptions and impacts of deepfakes across the globe.

This is a crucial point. Social context cannot be detached from how disinformation is interpreted and shared. For example, the interplay between local customs, communal expectations, and ingrained values determines how disinformation resonates across populations, influencing not only those who are persuaded by it but also the motivations for sharing or rejecting the content.

To illustrate, African scholars studied the factors influencing the believability and dissemination of misinformation in six Sub-Saharan African countries (Madrid-Morales et al., 2021). The research revealed that a country’s political culture and media system may affect how users interact with false information. For example, sharing political information, including misinformation, was an act of courage in Zimbabwe, a country with limited press freedom and ongoing authoritarian rule. In contrast, South Africa, which has an active media sector and a functioning democratic system, exhibited lower levels of motivation among consumers to share political news.

A different dynamic is present in some Asian countries. A report by the European Centre for Populism Studies (Yilmaz et al., 2022) of five Asian countries explains how governments in those countries often use religious values to justify digital authoritarianism, with little public backlash. Censorship, surveillance, and internet shutdowns are often justified as measures to safeguard religious and cultural values, with disinformation frequently cited as a rationale for restrictions. The appeal to shared religious and cultural values can contribute to less opposition to such restrictions that would otherwise be considered authoritarian.

Sabhanaz Rashid Diya (2024), a computational social scientist and Executive Director of the Tech Global Institute, a policy lab working at the intersection of private technology companies, civil society, and government to reduce equity gaps in the Global South, explains that in the “Global Majority,” voters are far more exposed to cheapfakes than to advanced deepfakes. This distorts civic discourse, discredits candidates, and worsens misinformation in fragile or emerging democracies with low digital literacy and limited press freedom. Their ease of production makes their impact potentially larger in scale. However, cheapfakes have been a blind spot, as major platforms focus their manipulated-media policies mainly on deepfakes, neglecting the far more widespread cheapfakes. Diya suggests the adoption of more technology-agnostic frameworks that focus on harm, not just on the level of sophistication of the manipulation. We agree.

2.5 Key Insights from the Literature Review
  • Cognitive and psychological effects: Deception, uncertainty, loss of trust, shifts in attitude; reinforcement of existing biases; distortion of memory and perception, including confident recall of false events. Effects are mediated by individual traits and socio-political context; cultural norms and media freedom influence spread and interpretation.

  • Persuasiveness vs. other misinformation: Deepfakes can influence beliefs but are not consistently more persuasive than text or image misinformation; video realism heightens credibility and sharing intent among low-information audiences.

  • Liar’s dividend: A key effect of deepfakes is the liar’s dividend, as wrongdoers increasingly dismiss authentic evidence as fake; public awareness of deepfakes fuels truth scepticism, eroding trust in visual and audio media.

  • Deepfakes vs. cheapfakes: Cheapfakes can be as believable as deepfakes; sophistication matters less than cognitive and contextual factors.

  • Global North–South digital divide: Research is Global North-centric and assumes universal access and effects; CIVICUS positions its research within Global South realities, where cheapfakes dominate amid infrastructure, language, and press-freedom gaps.

  • Cheapfakes in the Global South: The dominant form of manipulated media; produced and shared more easily than deepfakes and equally harmful, they distort civic discourse, discredit candidates, and deepen misinformation and polarisation. The danger lies in their effect on trust, participation, and the overall information environment.