FUTURE-PROOFING ELECTIONS AGAINST DEEPFAKE DISINFORMATION

PART 1: DEEPFAKES & ELECTORAL INTEGRITY

1.1 Deepfakes Explained

Deepfakes – also known as “synthetic media” – are videos, audio or images created or altered using machine learning algorithms. Machine learning, a subfield of AI, trains algorithms to identify patterns in data and apply that pattern recognition to make predictions or produce new outputs. As a subset of machine learning, Generative AI (GenAI) generates hyperrealistic images, audio or video. To produce text, GenAI models rely on Large Language Models (LLMs) to mimic human language and reasoning.

Textual representations of disinformation, even when AI-generated, are not considered deepfakes. That is because the term “deepfake” has its origin in a combination of “deep learning” and “fake” and is thus primarily concerned with the use of deep learning technology to create hyperrealistic depictions of individuals saying and doing things that did not happen, i.e., visual realism and impersonation (Singh & Dhumane, 2025).

The phrase “deepfake” is often used as a catch-all term for all synthetic media, but there is a distinction. Falsified videos and images produced with basic editing tools such as Adobe Photoshop, or with simple techniques such as cropping or splicing, are known as “cheapfakes” and have long been a part of the arsenal used by malicious actors.

Deepfakes, because they are AI-generated and amplified by the rise of GenAI platforms, represent a new frontier of risk that requires specific focus.

The focus on deepfakes does not intend to diminish the dangers and risks posed by cheapfakes. Deceptive content undermines information integrity regardless of the technology used to create it.

1.2 The History of Deepfake Technology

The origins of deepfake technology can be traced to the 1990s, when researchers began experimenting with computer-generated imagery (CGI) to create lifelike representations of humans (Regan, 2024). Progress accelerated through the 2010s, driven by the emergence of larger datasets, improvements in deep learning, and access to greater computing power.

The introduction of Ian Goodfellow’s Generative Adversarial Networks (GANs) in 2014 marked a landmark moment in the use of deep learning to automate the generation and refinement of deepfakes. This breakthrough laid the foundation for the modern GenAI system. GANs provided the architecture to enable much more realistic synthetic content. Gone were the days of the painstaking manual effort that CGI required. However, despite these advancements, deepfake production still required substantial technical expertise, coding abilities, and specialised hardware, keeping it largely within the capabilities of researchers and niche online communities.

The ultimate game-changer was the introduction of user-friendly, easily accessible GenAI platforms such as ChatGPT, Gemini, Synthesia, and now thousands of other applications that have made the production of deepfakes easier, cheaper, and faster.

1.3 Electoral Disinformation and Electoral Integrity

Within an electoral context, disinformation is false or misleading information intentionally created and disseminated to manipulate public opinion, suppress voter turnout, discredit opponents, and/or distort democratic processes and outcomes. It becomes misinformation when those unaware of its intent to mislead share it, thereby amplifying its reach.

Election integrity, according to the International Foundation for Electoral Systems (IFES) – a nonpartisan, nonprofit organisation advancing democracy and free and fair elections worldwide – requires “comprehensive, fair, and practicable legal frameworks; transparent and professional election administration; sound electoral operations and accurate election results; accessible and fair election dispute resolution mechanisms; inclusive and wide-ranging participation of voters and candidates; and professional and impartial media.”

Online electoral integrity requires information integrity: open information ecosystems where voters can access accurate, trustworthy information without being misled by disinformation, misinformation, hate speech or foreign malign influence operations.

The proliferation of electoral disinformation across the globe is well documented. It continues to have far-reaching consequences that erode information and electoral integrity. Meanwhile, autocratic actors continue to deploy internet shutdowns, online censorship, and cybersecurity laws to restrict civic space for civil society around the world.

The harm caused by electoral disinformation is most evident in its different tactics. These are:

Type of Disinformation / Main Tactic / Targets / Key Effects

Voter Suppression Disinformation
Main tactic: Spreading false information about voting dates, eligibility or safety at polling stations
Targets: Excluded groups, opposition voters
Key effects: Intentional voter confusion, demobilisation, and undermining of electoral integrity and trust in democratic institutions

Election Denialism Disinformation
Main tactic: Falsely asserting that elections are fraudulent or illegitimate
Targets: General public, election officials, and perceived political opponents
Key effects: Erodes public confidence, delegitimises outcomes, fuels conspiracy thinking, encourages political radicalisation, incites threats or violence

Identity-Focused Disinformation
Main tactic: Exploiting demographic, ethnic or religious differences through customised electoral disinformation
Targets: Targeted communities
Key effects: Inflames prejudice, social fragmentation, and polarisation; provokes violence and harassment, and reduces political participation

Gendered Disinformation
Main tactic: Using false or sexualised narratives to discredit, intimidate, and silence
Targets: Women, gender-nonconforming people
Key effects: Reputational harm, psychological trauma, deterred political participation by women, weakened democracy

Violent Extremism Disinformation
Targets: Public, opposition, dissenters
Key effects: Disrupts elections, sparks division, legitimises violence, erodes trust, radicalises sentiment, suppresses dissent, paired with digital repression

It is within this volatile climate of manipulation, violence, and falsehoods that, for the first time, the tools to fabricate hyperrealistic deepfakes are now available to the public at virtually no cost.

OpenAI reported that, as of October 2025, ChatGPT had 800 million weekly active users, an unprecedented scale of access: roughly a tenth of the global population using a single platform. In early 2025, Google’s Gemini and Anthropic’s Claude reported 300 million and 18.9 million active users, respectively.

It is essential to understand not only the “how” of disinformation but the “why”. At the core, disinformation campaigns are psychological warfare that weaponises existing social tensions and divisions to sow further division and deepen polarisation. They do so by targeting and exploiting human cognitive biases. In this way, false information achieves believability not through truth, but through psychological manipulation.

This negative effect is particularly potent in the context of electoral disinformation because politics cannot always be separated from emotion. It is through emotion that political attitudes and public opinion are formed, steering people towards certain types of content and influencing how it is interpreted through their inherent, and often deeply personal, cognitive biases. It is for this reason that disinformation is more prevalent, influential, and persistent on politically charged topics than on neutral or non-divisive ones (Zhou & Shen, 2024).

Malicious actors are aware of and understand this principle: effective disinformation hinges on emotional resonance, particularly through base emotions such as fear, anger, outrage, loyalty and hope. When messaging taps into these emotions, it becomes more compelling, more viral, and more likely to override analytical scrutiny. As explained by Claire Wardle and Hossein Derakhshan in their foundational paper on the study of information disorders (2017):

The most ‘successful’ of problematic content is that which plays on people’s emotions, encouraging feelings of superiority, anger or fear. That’s because these factors drive re-sharing among people who want to connect with their online communities and ‘tribes’. When most social platforms are engineered for people to publicly “perform” through likes, comments, or shares, it’s easy to understand why emotional content travels so quickly and widely, even as we see an explosion in fact-checking and debunking organisations.

What have been the impacts of deepfake disinformation since the launch of ChatGPT and similar platforms? Answering this question requires assessing how cognitive, emotional, and behavioural processes influence the believability of deepfakes. The need to carefully examine the psychosocial dynamics behind the spread of political disinformation, across all distribution methods, cannot be overstated.

Deepfakes cannot be understood solely through their technology. As an established principle, technological artifacts contain politics. Technology must be assessed not only for its contributions to efficiency and productivity, or for its positive and negative environmental side effects, but also for the ways in which it can embody specific forms of power and authority (Winner, 1980).

The impact of deepfakes is deeply contextual and depends on human factors such as adoption rates, the social norms and cultural values being disseminated, and the psychological mechanisms that govern how individuals interpret, accept, and share manipulated content.

It is at the intersection of technology, context, and human cognition where disinformation gains its persuasive power and spreads through societies. We will explore this in the next part of the paper.

As Access Now highlights, deepfakes should be analysed in a variety of contexts and not only within electoral periods:

For now, it isn’t clear what, if any, additional new risk of GenAI poses in the context of election disinformation that were not already present before generative AI came on the scene. In the meantime, while the world panics over the as-yet-unproven unique impact of generative AI on elections, the very real and distinct societal harms… we would be remiss not to mention the serious systemic risks posed by generative AI models used to create and disseminate non-consensual sexual imagery and child sexual abuse material, for instance, and the heightened online threats to the human rights, safety, and dignity of women, LGBTQ+ communities, and other racialized and marginalized groups.
