FUTURE-PROOFING ELECTIONS AGAINST DEEPFAKE DISINFORMATION

GLOSSARY

Algorithms

A fixed series of steps that a computer performs to solve a problem or complete a task. On social media platforms, algorithms compile and present content based on users’ engagement history and predicted interests, often influencing the spread of disinformation.

Algorithmic Bias

Systematic errors in algorithms that lead to unfair or skewed outcomes, such as prioritising certain content or demographics. Algorithmic bias can amplify misleading narratives and marginalise accurate information, often reinforcing stereotypes or polarising content.

Algorithmic Transparency

The degree to which the operations, criteria, and decision-making processes of algorithms, particularly recommender algorithms, are openly disclosed and understandable to users and researchers. Lack of transparency on social media platforms fuels disinformation by obscuring how content is prioritised or amplified, hindering efforts to detect manipulation or bias.

Amplification

The process of increasing the reach or visibility of content, either organically (through shares, likes, and comments) or artificially (via bots, sock puppets, or astroturfing). Amplification can also occur independently of algorithms or through coordinated efforts to manipulate platform rankings.

Artificial Intelligence (AI)

Computer systems performing tasks that typically require human intelligence, such as learning or pattern recognition. In disinformation, AI generates convincing fake content (e.g., deepfakes, text, images) or helps detect manipulation campaigns.

Automation

Software tools designed to complete tasks with minimal human direction. In disinformation, automation amplifies misleading narratives through bots or coordinated campaigns.

Cognitive Biases

Unconscious thinking patterns that influence how people interpret information. Disinformation exploits these biases to increase the likelihood that individuals will accept or share false narratives.

Content Moderation

The process of detecting and addressing content that violates a platform’s terms of use, using both automation and human review. Actions include demonetisation, downgrading or removal. Disinformation persists partly because moderation is inconsistent, particularly in non-English languages.

Content Removal

A moderation decision to delete content violating a platform’s Terms of Service. Enforcement varies across languages and regions, raising concerns about transparency and consistency.

Coordinated Inauthentic Behaviour (CIB)

Networks of accounts that secretly work together to sway online narratives, employing strategies such as identical posts and coordinated timing. CIB is central to many influence operations.

Data Access

The ability to retrieve digital information from platforms, often via APIs or scraping, which is crucial for disinformation research. Platform policies are increasingly restricting such access.

Data Mining

The process of discovering patterns in large social media datasets to detect coordinated behaviours, influential accounts or the spread of narratives. Together with data collection (e.g., scraping), it transforms raw data into insights.
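
As a minimal illustration, the Python sketch below (with invented sample data) groups posts by identical text to surface accounts pushing the same message verbatim, one simple signal of possible coordination:

    from collections import defaultdict

    # Hypothetical (account, post_text) pairs; real datasets would come
    # from platform APIs or scraping.
    posts = [
        ("acct_a", "Vote early, polls close at noon!"),
        ("acct_b", "Vote early, polls close at noon!"),
        ("acct_c", "Great weather today."),
        ("acct_d", "Vote early, polls close at noon!"),
    ]

    # Group accounts by identical post text.
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[text].add(account)

    # Flag any message pushed verbatim by several distinct accounts.
    for text, accounts in by_text.items():
        if len(accounts) >= 3:
            print(f"Possible coordination ({len(accounts)} accounts): {text!r}")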

Debunking

Exposing and correcting false claims through fact-checking, investigations or exposés, with the aim of countering disinformation and misinformation.

Deepfakes

Synthetic multimedia content that convincingly mimics real people or events, typically created to deceive. It is now increasingly produced using accessible, user-friendly Generative AI platforms such as ChatGPT, Grok, and others. Deepfakes enable malicious actors to craft realistic fake videos, audio, or images for disinformation campaigns, such as impersonating public figures or spreading false narratives.

Deep Learning

A subfield of machine learning that uses multi-layered (deep) neural networks to learn complex patterns from large volumes of data. Deep learning underpins image and speech recognition as well as the generative models used to create deepfakes and other synthetic media.

Digital Democracy

The use of digital technologies to support democratic participation, transparency, and accountability. It encompasses online civic engagement, access to information, and the ability to express political views in safe and inclusive digital spaces without fear of retribution. Digital democracy depends on the integrity of the online information environment: when polluted by disinformation, misinformation, hate speech or manipulation, these spaces can become tools for exclusion, polarisation and democratic erosion, undermining electoral integrity.

Digital Literacy

The capacity to critically evaluate and interact with digital content, identify information disorders, and safeguard online privacy.

Digital Resilience

The capacity of individuals or societies to withstand and adapt to digital threats, including disinformation, surveillance and censorship.

Digital Rights

Online freedoms and protections, including privacy, freedom of expression, and access to information. These rights encompass safeguards against censorship, surveillance, and other online harms.

Disinformation

False information deliberately created or spread to cause harm, often for political, financial or social motives.

Electoral Disinformation

False or misleading information deliberately spread to influence elections, undermine electoral integrity or manipulate voter behaviour, thereby threatening digital democracy. Tactics include voter suppression, election denialism, microtargeted disinformation, fake news, deepfakes and narrative hijacking, used to sow distrust, polarise voters or discredit candidates.

Electoral Integrity

The degree to which electoral processes are free, fair, and credible, supported by transparent systems, impartial institutions, and an informed electorate. In digital spaces, electoral integrity depends on the health of the online information environment: voters must be able to access accurate, trustworthy information without being misled by disinformation, misinformation, hate speech or foreign and domestic manipulation. A compromised digital environment can distort public perception, suppress participation, and undermine trust in electoral outcomes.

Fact-Checking

Verifying the accuracy of public statements or reports.

Fake News

Disinformation formatted to resemble authentic news, often as falsified articles or websites.

Foreign Information Manipulation and Interference (FIMI)

The deliberate actions undertaken by a foreign government or entity to exert influence over another country’s decision-making, policies or public opinion, often in ways that benefit the foreign actor’s interests. These operations can employ covert and overt methods, including disinformation campaigns, cyberattacks, and financial inducements, and are often designed to undermine democratic institutions, manipulate public discourse or advance a foreign government’s strategic objectives. Also known as foreign influence operations.

Freedom of Speech

The right to express opinions without censorship or penalty. It can be misused to resist content moderation, and disinformation can itself be used to suppress targeted groups’ freedom of speech.

Gendered Disinformation

False or misleading content specifically designed to target women and gender non-conforming individuals, particularly those in public, political or activist roles. It blends traditional disinformation tactics with gender-based abuse to silence, discredit or intimidate its targets. Typical techniques include the spread of misogynistic narratives; the sexualisation and manipulation of images or videos (including deepfakes); the reinforcement of harmful gender stereotypes; and coordinated harassment campaigns involving threats of violence, doxxing or cyberattacks. The goal is not only reputational harm but also to deter political participation and limit visibility in public discourse.

Generative AI

AI that creates new content (e.g., text, images, audio); in disinformation, it is used to produce deepfakes or tailored propaganda. Examples include ChatGPT, DALL-E and Grok.

Hate Speech

Communication attacking individuals or groups based on protected attributes (e.g., race, gender). It often overlaps with disinformation to incite violence or silence voices.

Inferred Identities

A set of personal or social characteristics, such as race, gender, religion, location or political affiliation, deduced by algorithms or platforms based on a user’s online behaviour, interactions, and data patterns rather than voluntarily disclosed information. These algorithmically derived profiles are often used in microtargeting, including disinformation campaigns, to craft tailored content that exploits perceived vulnerabilities or group affiliations.

Information Disorders

An umbrella term for various forms of harmful or misleading content that distort truth and undermine public discourse. It includes disinformation, misinformation, malinformation, propaganda, conspiracy theories, clickbait, satire or parody shared as fact, hoaxes, trolling, imposter content, synthetic media and others.

Information Integrity

An ecosystem in which accurate, reliable information is consistently available and accessible, and in which freedom of expression is protected.

Information Overload

A state where excessive information volume overwhelms critical processing, enabling disinformation to spread unnoticed, especially during crises.

Information Vacuum

A lack of timely, accurate information, allowing disinformation, rumours or conspiracies to fill the gap, a common occurrence during crises or elections.

Influence Operations

Coordinated efforts to manipulate public opinion or behaviour using deceptive tactics such as disinformation or fake accounts.

Influencers

Individuals with large social media followings who shape opinions or behaviour.

Influencer-for-Hire

An influencer paid to amplify specific narratives or discredit opponents, often covertly.

Inoculation Theory

A strategy, grounded in behavioural science, that aims to foster psychological resilience against misinformation by presenting individuals with a weakened version of deceptive content, accompanied by refutations or counterarguments. This “prebunking” approach equips individuals to recognise and reject future attempts at manipulation.

Internet Shutdowns

Deliberate interruptions of internet connectivity intended to control the flow of information; they often intensify misinformation by restricting access to accurate sources.

Labelling

Content moderation practice of applying informational labels to posts or accounts to provide context (e.g., marking disinformation or sensitive content).

Large Language Models (LLMs)

AI models trained on vast amounts of text data to generate human-like language. Examples include ChatGPT, Grok, and LLaMA.
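
To make “generating human-like language” concrete, the sketch below uses the open-source Hugging Face transformers library (with PyTorch installed) and the small, freely downloadable GPT-2 model, chosen purely for accessibility; the commercial systems named above are far more capable:

    from transformers import pipeline

    # Load a small, freely available language model (downloads on first run).
    generator = pipeline("text-generation", model="gpt2")

    # Ask the model to continue a prompt; larger LLMs do the same thing at
    # far higher quality, which is what makes synthetic text hard to spot.
    result = generator("Election officials announced today that",
                       max_new_tokens=30, num_return_sequences=1)
    print(result[0]["generated_text"])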

Linguistic Disparity in Moderation

The inconsistent or inadequate moderation of harmful content, such as disinformation or hate speech, in non-English languages due to limited automated systems or human oversight. Malicious actors exploit this gap by using native languages or word camouflage to bypass detection, particularly in electoral disinformation campaigns targeting diverse linguistic communities.

Malign Actors

Individuals, groups or commercial entities intentionally spreading disinformation or manipulating information ecosystems.

Malinformation

Truthful information shared to cause harm, often by revealing private data or using facts out of context (e.g., doxxing).

Machine Learning (ML)

A subfield of artificial intelligence (AI) that enables computers to learn from data and improve performance without being explicitly programmed. Machine learning systems identify patterns in existing data and apply that knowledge to make predictions or classifications, or to generate outputs, when exposed to new information.
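
A minimal sketch of the learn-then-predict loop described above, using the scikit-learn library and an invented toy dataset of labelled posts:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented training set: short posts labelled 1 (misleading) or 0 (benign).
    texts = [
        "BREAKING: ballots found in river, election rigged",
        "Polling stations open 8am to 6pm on Saturday",
        "Secret plot to steal your vote exposed!!!",
        "Remember to bring photo ID to the polling station",
    ]
    labels = [1, 0, 1, 0]

    # Learn patterns from the labelled examples...
    vectoriser = TfidfVectorizer().fit(texts)
    model = LogisticRegression().fit(vectoriser.transform(texts), labels)

    # ...then apply that knowledge to unseen text.
    new_post = ["They are hiding the truth about rigged ballots"]
    print(model.predict(vectoriser.transform(new_post)))  # likely [1] here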

Manufactured Amplification

Deliberate boosting of content visibility through deceptive means (e.g., bots, sockpuppets) to distort perceived popularity or credibility.

Media Literacy

The competencies needed to engage critically with media and to assess source credibility and truthfulness.

Microtargeting

The practice of sending highly tailored content or ads to small, specific groups based on personal characteristics and beliefs.

Misinformation

False information spread without intent to mislead, typically believed to be true by those who share it.

Natural Language Processing (NLP)

A field of AI that enables computers to understand, interpret, and generate human language. In disinformation research, NLP is used to analyse large volumes of social media content, detect harmful narratives, identify emotional tones and automate the recognition of patterns of misinformation and disinformation.
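
As a toy illustration of one NLP task mentioned above, detecting emotional tone, the sketch below scores posts against a small hand-made lexicon; production systems rely on trained models, but the pipeline shape (tokenise, score, aggregate) is the same:

    import re

    # Hypothetical mini-lexicon of fear/outrage terms (an assumption for
    # illustration; real systems learn such signals from data).
    FEAR_WORDS = {"rigged", "stolen", "fraud", "corrupt", "plot"}

    def fear_score(post: str) -> float:
        """Fraction of tokens in a post that signal fear or outrage."""
        tokens = re.findall(r"[a-z']+", post.lower())
        if not tokens:
            return 0.0
        return sum(t in FEAR_WORDS for t in tokens) / len(tokens)

    posts = [
        "The election was rigged and stolen by a corrupt plot",
        "Turnout was high and counting proceeded smoothly",
    ]
    for p in posts:
        print(f"{fear_score(p):.2f}  {p}")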

Online Violent Extremism

Using digital platforms to promote or incite ideologically motivated violence, often through echo chambers and algorithmic reinforcement.

Open-Source Intelligence (OSINT)

The practice of collecting, analysing, and interpreting publicly available information from digital, print, and broadcast sources to generate actionable insights. In the context of disinformation, OSINT leverages social media platforms, news outlets, websites, forums and multimedia content to detect coordinated manipulation, trace the origins of false narratives, and identify threat actors. OSINT is a foundational method in digital investigations, election monitoring, and media forensics, valued for its transparency, verifiability, and ethical alignment when conducted responsibly.

Prebunking

Anticipating and countering disinformation before it spreads, using past fact-checks to prepare responses.

Propaganda

A form of strategic political communication aimed at influencing public opinion or behaviour in support of a political, ideological or institutional agenda. Propaganda typically involves the selective use of facts, emotional appeals, repetition and symbolic messaging to persuade and mobilise audiences. While not always false or harmful, propaganda can become problematic when it distorts reality, suppresses dissent, or legitimises authoritarianism. In electoral contexts, it is distinct from disinformation, although the two may intersect.

Psychographic Profiling Data

Information that categorises individuals based on psychological attributes such as values, beliefs, interests, attitudes, lifestyles and personality traits. This data is often derived from online behaviour, including social media activity, likes, shares, and browsing habits, and is used to predict and influence decision-making, especially in targeted advertising and political microtargeting campaigns.

Recommender Algorithm

An automated system used by social media platforms to select, rank, and present content based on user behaviour, interests, and engagement signals. Powered by machine learning, this algorithm prioritises attention-grabbing content, often amplifying disinformation by creating feedback loops that reinforce cognitive biases, such as confirmation bias, and entrench user beliefs.
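
A deliberately simplified sketch of engagement-weighted ranking, with invented weights rather than any platform’s actual model, showing how posts that already provoke reactions get pushed further up the feed:

    # Hypothetical engagement counts per post.
    posts = [
        {"text": "Calm policy explainer", "likes": 12, "shares": 1, "comments": 2},
        {"text": "Outrage-bait rumour", "likes": 40, "shares": 25, "comments": 30},
    ]

    def engagement_score(post: dict) -> float:
        # Invented weights: shares and comments count more than likes
        # because they push content to new audiences.
        return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

    # Rank the feed by engagement, highest first; attention-grabbing
    # content wins, which is the feedback loop described above.
    for post in sorted(posts, key=engagement_score, reverse=True):
        print(engagement_score(post), post["text"])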

Regulatory Responses

Government or institutional policies and laws aimed at combating disinformation, hate speech, or platform manipulation. Examples include content moderation mandates, transparency requirements for algorithms, or penalties for spreading false information. These efforts aim to enhance information integrity but face challenges in enforcement and striking a balance with free speech.

Social Media Data

Publicly available content and metadata from platforms, used to detect disinformation trends or campaigns.

Social Media Digital Forensics

The specialised process of collecting, preserving, and analysing social media data to uncover evidence of harmful activities, such as disinformation, cyberbullying or hate speech, often perpetrated by anonymous accounts. Techniques include metadata analysis, linguistic profiling, network mapping and reverse image searches to trace origins, identify hidden networks or attribute content to malicious actors despite anonymity. This field is critical for exposing coordinated manipulation and ensuring admissible evidence for legal or public accountability.
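
Network mapping, one of the techniques listed above, can be sketched with the open-source networkx library: accounts become nodes, interactions become edges, and unusually dense clusters merit closer scrutiny. The interaction records below are invented:

    import networkx as nx

    # Hypothetical interaction records: (source account, target account).
    interactions = [
        ("acct_1", "acct_2"), ("acct_2", "acct_3"), ("acct_3", "acct_1"),
        ("acct_1", "acct_3"), ("acct_4", "acct_5"),
    ]

    graph = nx.Graph()
    graph.add_edges_from(interactions)

    # Densely interconnected clusters can indicate coordinated networks.
    for cluster in nx.connected_components(graph):
        sub = graph.subgraph(cluster)
        print(sorted(cluster), "density:", round(nx.density(sub), 2))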

Social Media Metrics

The analysis of social media data to provide a quantitative measurement of a topic; for example, analysing the conversation volume on a specific topic and comparing that against other topics.
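
At its simplest, the volume comparison described above reduces to counting, as in this sketch with invented topic tags:

    from collections import Counter

    # Hypothetical stream of topic tags attached to collected posts.
    post_topics = ["election", "election", "economy", "election", "health",
                   "economy", "election"]

    # Conversation volume per topic, for side-by-side comparison.
    for topic, count in Counter(post_topics).most_common():
        print(f"{topic}: {count} posts")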

Social Media Monitoring

The real-time tracking and recording of social media activity, such as mentions, hashtags, or keywords, to observe engagement, flag incidents, and identify disinformation as it spreads.
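
In its most basic form, monitoring is a watchlist matched against an incoming stream of posts, as in this sketch (keywords and posts invented for illustration):

    # Hypothetical watchlist of terms and hashtags to track.
    WATCHLIST = {"#stopthecount", "ballot dumping", "rigged"}

    def flag(post: str) -> set:
        """Return which watchlist terms a post mentions, if any."""
        text = post.lower()
        return {term for term in WATCHLIST if term in text}

    stream = [
        "Long queues but spirits are high at the polls",
        "They caught them ballot dumping last night! #stopthecount",
    ]
    for post in stream:
        hits = flag(post)
        if hits:
            print("FLAGGED:", sorted(hits), "->", post)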

Social Media Listening

The process of tracking and analysing online conversations to understand public sentiment, detect emerging trends, and uncover disinformation patterns. Unlike social media monitoring, which focuses on observing and recording activity, social listening interprets meaning and context.

Synthetic Media

AI-generated or manipulated content used to create convincing disinformation.

Targeted Harassment

Coordinated online attacks to threaten or silence individuals, often overlapping with disinformation or hate speech.

Technology-Facilitated Gender-Based Violence (TFGBV)

Gender-based harm via digital platforms, including harassment, doxxing or gendered disinformation targeting women or gender-diverse individuals.

Trolling

Inflammatory online behaviour to provoke negative reactions, often used in disinformation to distract or polarise.

Troll Farm

A group engaging in coordinated trolling or bot-like narrative promotion, also called a troll army.

User-Generated Content (UGC)

Any form of content created and voluntarily shared by individual users on digital platforms, rather than by the platforms themselves, professional media or paid content producers. It stands in contrast to coordinated content produced by content farms, bot farms or commercial disinformation operators.

Web Scraping

Extracting data from websites without APIs; used in disinformation research but may violate platform terms of service.
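
A minimal sketch of the technique using the requests and BeautifulSoup libraries against a placeholder URL; the tag choice is an assumption (inspect the real page for the right selectors), and, as the definition notes, permissions and terms of service should be checked before scraping:

    import requests
    from bs4 import BeautifulSoup

    # Placeholder URL; substitute a page you are permitted to scrape.
    url = "https://example.com/articles"

    response = requests.get(url, timeout=10)
    response.raise_for_status()

    # Parse the HTML and extract headline text from <h2> tags.
    soup = BeautifulSoup(response.text, "html.parser")
    for headline in soup.find_all("h2"):
        print(headline.get_text(strip=True))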