FUTURE-PROOFING ELECTIONS AGAINST DEEPFAKE DISINFORMATION

PART 4: COUNTRY CASE STUDIES

In this section, we provide a brief overview of each of the four countries’ elections covered by the case studies, accompanied by carefully and extensively contextualised social, cultural, economic and political data. This data depicts the conditions in which the elections took place and how these may have influenced the spread and prevalence of deepfake disinformation.

The intent is to unravel the vulnerabilities to, and resilience factors against, disinformation in general and, where present, deepfakes. This is to encourage a focus on the root causes of disinformation and deepfakes rather than merely their occurrence.

Alongside the case studies is a proposed Deepfake Risk Matrix. The Matrix allows for more contextually grounded, evidence-based analysis of deepfake disinformation risks. It can facilitate a more nuanced understanding of vulnerabilities across diverse social, political, and technological contexts. The Matrix does this by clarifying vulnerabilities, indicators, and impacts across social, political, economic and regulatory factors as well as malign actors and technological domains. It assigns a risk level to guide appropriate strategic responses.

The Deepfake Risk Matrix can and should be tailored and contextualised.

Deepfake Risk Matrix

Factor: Social
Key Indicators:
  • Media trust and literacy levels
  • Polarisation
  • Digital comms vs. cultural norms (e.g., WhatsApp as the primary method of communication)
Impact Metrics:
  • Erosion of political trust
  • Social fragmentation
Risk Level: Low–High
Strategic Response:
  • Media literacy campaigns
  • Civic outreach
  • Community rumour tracking

Factor: Economic
Key Indicators:
  • Internet penetration rate
  • Inequality (e.g., monetisation of disinformation)
Impact Metrics:
  • Market manipulation
  • Resource diversion
Risk Level: Low–High
Strategic Response:
  • Require public labelling of paid-for posts
  • Platform accountability

Factor: Digital Ecosystem
Key Indicators:
  • Internet/mobile access
  • Platform dominance
Impact Metrics:
  • Level of internet and network access
  • Legal restrictions on content or access to platforms
Risk Level: Low–Medium
Strategic Response:
  • AI detection tools
  • Platform accountability

Factor: Regulatory Context
Key Indicators:
  • Free and fair elections
  • Freedom of expression
  • Anti-deepfake legislation
Impact Metrics:
  • Overreach vs. underreach
  • Civic trust in institutions
Risk Level: Low–High
Strategic Response:
  • Deepfake labelling
  • Platform liability frameworks

Factor: Actor Involvement
Key Indicators:
  • Foreign/state actors
  • Commercial disinformation
  • Political actors
Impact Metrics:
  • Hate speech
  • Electoral interference
Risk Level: Low–High
Strategic Response:
  • Investigative journalism
  • OSINT
  • Real-time debunking
  • International cooperation
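One way of operationalising the Matrix is to encode it as a simple lookup from a factor and an assessed risk level to the suggested strategic responses. The sketch below is purely illustrative: the data structure and function names are ours, only two factors are shown, and the risk assessment itself must come from country-specific evidence.

```python
# Illustrative encoding of the Deepfake Risk Matrix as a lookup structure.
# Factor names and responses follow the Matrix above; the structure itself
# is an assumption for illustration, not part of the Matrix as published.

RISK_MATRIX = {
    "social": {
        "indicators": ["media trust and literacy", "polarisation",
                       "digital comms vs. cultural norms"],
        "responses": ["media literacy campaigns", "civic outreach",
                      "community rumour tracking"],
    },
    "economic": {
        "indicators": ["internet penetration rate", "inequality"],
        "responses": ["public labelling of paid-for posts",
                      "platform accountability"],
    },
}

def strategic_responses(factor: str, risk_level: str) -> list:
    """Return the Matrix's suggested responses for a factor once a risk
    level ('low', 'medium' or 'high') has been assessed for a country."""
    entry = RISK_MATRIX.get(factor)
    if entry is None or risk_level == "low":
        return []  # low risk: monitor only, no active response needed
    return entry["responses"]

print(strategic_responses("social", "high"))
# ['media literacy campaigns', 'civic outreach', 'community rumour tracking']
```

In practice the value of the Matrix lies in the contextual assessment behind the risk level, not the lookup; the sketch only shows how responses could be tied consistently to that assessment.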
4.1 Namibia

The people of Namibia went to the polls on 27 November 2024 to elect a new President and members of the National Assembly. Namibia operates a hybrid democratic model that blends proportional representation with direct and indirect elections, designed to reflect both national and regional interests.

Despite a relatively high score of 80/100, the CIVICUS Monitor ranks Namibia as having ‘narrowed’ civic space. Voting in Namibia’s 2024 elections, initially scheduled for 27 November, was extended to 29–30 November due to logistical delays. The elections resulted in a historic victory for the SWAPO movement, as Netumbo Nandi-Ndaitwah became the country’s first female President with 58.07% of the vote.

The pre-election environment was described by Good Governance Africa (2024) as “one of the most fiercely contested in history” as SWAPO, once a dominant liberation movement, faced unprecedented competition. In 2019, President Hage Geingob secured re-election with just 56% of the vote, a sharp decline from 87% in 2014. By 2024, the ruling party had to contend with new political forces.

The win for SWAPO was legally contested by the opposition Independent Patriots for Change (IPC) and its allies, which filed a lawsuit on 15 January 2025 alleging voter suppression, irregularities, technical failures and ballot shortages. While the election has become one of the most disputed in Namibia’s democratic history, the African Union Election Observation Mission reported that it was largely peaceful and conducted within the national legal framework.

The November 2024 vote took place amidst several labour actions and gender rights-related protests which addressed some of the prevailing socioeconomic issues in the country:

  • Contract workers for the City of Windhoek protested for permanent employment and better health benefits, but the City initially refused to recognise their strike or accept their petition.
  • The Namibian Economic Freedom Fighters demonstrated against a Rundu service station after a video surfaced showing employees being whipped in exchange for loans.
  • Activists and politicians were arrested during a planned Independence Day protest highlighting youth unemployment, with police citing national security concerns over the planned protest.
  • Gender activists marched to demand inclusive protections under the Domestic Violence Act, particularly for same-sex couples, and submitted a petition to the authorities.

Overall, Namibia retains a strong democratic culture. Freedom House’s Freedom in the World Report (2025) gives Namibia a score of 73/100, ranking it as “free” and recognising its multiparty democracy and respect for civil liberties. The Freedom House report notes the following for Namibia:

  • Free and fair elections: The electoral framework is robust and generally well implemented.
  • Political pluralism: Opposition parties may freely compete in elections and generally do not encounter intimidation or harassment.
  • Freedom of expression: Citizens are generally able to express political choices without undue influence from external actors.
  • Media freedom: The constitution guarantees freedom of the press and freedom of expression; journalists face few legal restrictions and typically work without risk to their personal safety.
  • Women’s political participation: Women are often discouraged from running for office, and few contested the November 2020 regional and local elections.
  • Anti-corruption enforcement: While Namibia has a sound legal framework, anti-corruption laws are inconsistently enforced.
  • Civil society: CSOs generally operate without interference, though government leaders occasionally use public platforms to criticise them.

Although a Freedom on the Net report was not compiled for Namibia, the country has consistently been recognised for its strong protection of media freedom and expression, often ranking among the top five countries in Africa. These protections extend to online spaces. According to DataReportal, Namibia’s internet penetration rate stands at 62%, with around 23% of citizens using social media, primarily Facebook and WhatsApp. However, a digital divide persists. Data costs remain prohibitively high and rural areas still experience limited connectivity.

Namibia currently lacks a specific law addressing disinformation. This area could be developed in the future, provided that any new regulation safeguards free expression and mitigates societal harms and gendered disinformation.

Public attitudes towards Namibia’s democracy remain robust, representing strong resilience factors against disinformation. According to Afrobarometer (2024), nine out of ten Namibians (90%) feel completely free to vote, and 74% report high levels of trust in the country’s election authorities. These indicators reflect enduring democratic resilience and electoral integrity, both of which serve as buffers against disinformation.

Trust in the media is also high; 75% of respondents believe journalists can report freely without government interference. This trust acts as another resilience factor against disinformation, as citizens still view the media as a credible source of verified and accurate information. By contrast, in countries such as the United States, where media trust is low, vulnerability to misinformation tends to be much higher as audiences turn to unreliable sources.

Namibia also benefits from a vibrant civil society sector, with numerous CSOs working on issues related to information integrity. These include Namibia Fact Check (fact-checking), the Editors’ Forum of Namibia (media freedom), the Namibia Media Trust (digital rights), the Action Namibia Coalition (access to information), and the Women’s Leadership Centre (technology-facilitated gender-based violence).

Deepfake Disinformation Report: Namibia

Namibia’s 2024 election period saw a marked rise in misinformation, particularly on WhatsApp and Facebook, according to the country’s Media Ombudsman. This was especially concerning given that nearly two-thirds of registered voters were young people who rely heavily on social media for political information (Media Ombudsman, 2025).

In partnership with the Institute for Public Policy Research (IPPR), Namibia Fact Check conducted a five-month misinformation monitoring project that revealed the following during the election period:

  • WhatsApp remained the primary channel for circulating false election-related content, while TikTok, used for the first time in a Namibian election, emerged as a significant new vector of communication.
  • Political parties, politicians, and their supporters were all complicit in spreading misinformation, primarily via social media and messaging platforms.
  • Coordinated disinformation campaigns were observed across multiple platforms.
  • Long-running smear campaigns targeted both ruling and opposition parties, their candidates, electoral authorities, and government institutions.
  • These campaigns often involved foreign actors and influencers, with narratives laundered through domestic and international online sources, including manipulated news coverage.
  • Namibian news outlets inadvertently amplified false information through superficial event-based reporting and insufficient fact-checking, while mistakes by regional and international media further eroded public trust.
  • Electoral authorities appeared ill-equipped to counter the volume and sophistication of online misinformation, including attacks on their credibility.

Reflecting a broader Global South trend, Namibia saw no verified deepfakes during the 2024 elections. However, cheapfakes circulated widely on social media and messaging apps throughout the year. According to Namibia Fact Check and the IPPR, malign actors increasingly experimented with AI-assisted and low-tech manipulations to advance smear campaigns and launder false narratives.

Several notable cheapfakes were detected, though most were low quality and had limited impact. They included:

  • A video of former US President Joe Biden appearing to endorse Netumbo Nandi-Ndaitwah;
  • An audio clip of a candidate making vulgar, tribalist remarks; and
  • Various fabricated letters allegedly written by political candidates.

The most prominent cheapfake showed presidential candidate Nandi-Ndaitwah collapsing on stage during a rally. This incident formed part of a broader campaign of gendered disinformation, which fixated on her age and gender to question her fitness for leadership. Such narratives diverted attention from substantive political debate and sought to undermine public perceptions of her candidacy. Despite these efforts, she went on to win the presidency, becoming Namibia’s first female head of state.

An AI-generated image depicting the ‘collapse’ of Namibian presidential candidate Netumbo Nandi-Ndaitwah began circulating online at the end of October 2024.

Applying the Deepfake Risk Matrix to Namibia’s Social Factors

Interview: Namibia

CIVICUS discussed the Namibian General Election with Frederico Links, Editor and Project Coordinator of Namibia Fact Check, a CSO and project of the Institute for Public Policy Research (IPPR) started in mid-2019 in response to the rise of political disinformation and propaganda in online political spaces.

We ran two projects concurrently ahead of, during, and after the 27 November 2024 parliamentary and presidential elections.

One was the coordination of a coalition of media and civil society organisations to counter election-related mis- and disinformation, supported by Africa Check with funding from the Google News Initiative. The project sought to help voters critically engage with information and make informed decisions in the voting booth. The coalition collaborated to fact-check politicians and political party claims, provide voters with reliable, nonpartisan information on key issues, and equip the public with the skills they need to identify election misinformation.

The other project we had running was one identifying, monitoring, and tracking election-related mis- and disinformation narratives, actors, and pathways. A report was produced and launched earlier this year capturing what we observed on the electoral information landscape. We saw that social media has become important for political communication and messaging, especially during electoral periods in Namibia. However, political actors (politicians and parties) often used social media to engage in negative campaigning by employing smear campaigns and spreading fake news to intimidate and incite. We also found that the actors associated with or supporting various political parties or causes were behind much of the election-related mis- and disinformation that was circulating across social media and messaging platforms before, during, and after the 27 November 2024 elections.

The emergent use of AI tools to generate content for online dissemination indicates that AI-generated political content, including disinformation, will probably become a political and electoral headache down the line. While the AI-generated disinformation observed on the Namibian electoral information landscape was still rather basic, crude and detectable, it seems clear that disinformation actors were experimenting and will probably become better and more sophisticated moving forward. This means that harder-to-detect AI-generated content, including deepfakes, will increasingly appear on Namibian political and electoral information landscapes, probably sooner rather than later.

Social Factor: Media Trust & Literacy
Assessment: Moderate trust in traditional media, but high reliance on social media, with differing levels of digital literacy.
Gendered Disinformation Risk: Women and rural voters are more vulnerable to manipulated narratives due to limited media literacy.

Social Factor: Social Polarisation
Assessment: Rising political tensions, especially between SWAPO and emerging opposition parties.
Gendered Disinformation Risk: Female candidates may be targeted with divisive deepfakes to exploit identity politics.

Social Factor: Cultural Norms & Gender Roles
Assessment: Patriarchal norms persist, especially in rural areas; women are underrepresented in politics.
Gendered Disinformation Risk: Deepfakes often weaponise gender stereotypes to discredit women’s leadership and credibility.

Social Factor: Minority Representation
Assessment: The Ovambo majority dominates; the San and other minorities face systemic exclusion.
Gendered Disinformation Risk: Minority women are doubly excluded and vulnerable to invisibilisation and targeted misinformation.

Social Factor: Social Media Behaviour
Assessment: High virality of unverified content; WhatsApp and Facebook dominate.
Gendered Disinformation Risk: Gendered rumours and doctored images spread rapidly in closed networks with little content moderation.

Social Factor: Civil Society Engagement
Assessment: There are strong watchdogs and women’s rights groups, but their reach is limited in remote regions.
Gendered Disinformation Risk: CSOs are potential allies in countering gendered disinformation, but they need capacity-building and digital tools.

Strategic interventions targeted at curbing gendered disinformation could include:

  • Targeted Media Literacy: Prioritise outreach to women, youth, and rural communities with gender-aware digital education.
  • Safe Reporting Channels: Establish anonymous platforms for female candidates and activists to report deepfake abuse.
  • Narrative Resilience: Support women-led storytelling initiatives to counteract disinformation with authentic voices.
  • Intersectional Monitoring: Track disinformation trends that intersect with gender, ethnicity, and geography to enable more innovative interventions.
  • Regulatory Intervention: Develop a legal framework targeting disinformation in general, including gendered disinformation, that protects free speech while still mitigating harms.
4.2 Ecuador

The Ecuadorian General Election took place on 9 February 2025 to elect a President, the National Assembly, 21 provincial assemblies, and Ecuador’s representatives to the Andean Parliament.

Ecuador has a proportional representation electoral system, and voting is compulsory for citizens aged 18 to 65. To win the presidency in the first round, a candidate needs at least 40% of the valid vote and a lead of at least 10 percentage points over the second-placed candidate. If no candidate achieves this, a second round is held between the two candidates with the most votes. This happened, and a run-off election was held on 13 April 2025, in which incumbent President Daniel Noboa was re-elected, defeating Luisa González of the Citizen Revolution Movement.
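To make the threshold arithmetic concrete, the first-round rule can be expressed as a simple check. This is an illustrative sketch: the function name and figures are ours, and the absolute-majority path, which is part of the constitutional rule though not spelled out above, is included for completeness.

```python
def needs_runoff(top_share: float, second_share: float) -> bool:
    """Ecuador's first-round presidential rule: a candidate wins outright
    with an absolute majority of valid votes, or with at least 40% and a
    lead of at least 10 percentage points over the runner-up; otherwise
    the top two candidates go to a second round."""
    if top_share > 50.0:          # absolute majority wins outright
        return False
    if top_share >= 40.0 and top_share - second_share >= 10.0:
        return False              # 40% threshold plus 10-point lead
    return True

# The February 2025 first round was a near tie between Noboa and González
# (illustrative shares below), so a run-off was required.
print(needs_runoff(44.0, 44.0))  # True
```

Note that a candidate can clear 40% comfortably and still face a run-off if the lead over the runner-up is under 10 points, which is exactly what happened in 2025.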

As a country where gang violence has surged, resulting in journalists and media outlets becoming frequent targets of attacks, we ranked Ecuador as having ‘obstructed’ (48/100) civic space in the 2024 CIVICUS Monitor. As an example, following the declaration of a 60-day state of emergency after the escape of a gang leader in January 2024, heavily armed gang members stormed the TC Televisión studio in Guayaquil, interrupting a live broadcast of the El Noticiero news programme. Attackers threatened media workers with firearms, brandished grenades and fired shots, all broadcast live across the country.

Freedom House’s Freedom in the World Report (2025) ranks Ecuador as “partly free.” While the country holds regular, competitive elections, the influence of organised crime and related violence has increased significantly in recent years, affecting the functioning of state institutions and the security of ordinary citizens. Due process violations, attacks on journalists, human rights abuses and official corruption are ongoing challenges.

A mix of positive and negative factors also contributed to this “partly free” ranking, including:

  • Multiple parties compete in Ecuador’s political system, but historically they have mainly been personality-based, clientelist, and weak in governance.
  • Criminal organisations and related violence increasingly constrain Ecuadorians’ political choices. The surge in political violence in 2023 and concerns over illicit campaign financing persisted in 2024. At least three mayors were murdered during the year.
  • Citizens enjoy formal political equality, regardless of race, gender, and other such distinctions, though some disparities in access and influence persist in practice. Electoral regulations mandate that women account for 50% of the candidates on party lists in multimember districts.
  • Ecuador has long been plagued by corruption, with a weak judiciary and lack of investigative capacity in government oversight agencies, which has contributed to impunity.

Ecuador’s Freedom on the Net score is “partly free.” According to DataReportal, internet penetration in Ecuador is 83.6%, with 75% of the population using social media. This high internet penetration rate, however, masks a digital divide. The Freedom on the Net report states that although internet access has become more affordable in recent years, it remains prohibitively expensive for many, with average broadband prices in Ecuador still higher than in many other South American countries. While many Ecuadorians are online, the quality of their access varies, which affects citizens’ ability to engage in digital activism and participate meaningfully in digital democracy, leaving space for elites to shape election narratives relatively unchallenged. This is a significant disinformation vulnerability.

Ecuador’s journalists and communicators face significant threats to their safety, especially when reporting on elections and sensitive topics. These threats are sometimes levelled by state actors or organised crime. Some journalists who report on politically sensitive issues have either been forced into exile due to threats against their physical safety or they self-censor through anonymisation. This is perhaps one of the most significant vulnerabilities to disinformation. Without an open and free media space, there is less rigour to fact-check, contextualise, and challenge false claims and information emanating from the government and other actors. In such contexts, disinformation spreads with little resistance. Restricted media environments also drive citizens to informal channels, such as social media or rumours, where content is harder to verify and more easily manipulated. In such contexts, trust in credible information sources declines, accountability weakens, and election integrity is threatened as manipulative narratives gain greater traction.

Another concern is that while the Ecuadorian government does not employ overt technical censorship, it exerts control through other means, including legislative measures:

  • A provision in the 2015 Organic Law of Telecommunications grants the President unilateral power to take over telecommunications services during a national emergency. CSOs have repeatedly raised concerns about the provision’s scope and the possibility of government abuse, given the law’s vague standards and lack of independent or impartial oversight. The provision does not appear to have been invoked, but its existence remains a threat.
  • Under former President Rafael Correa Delgado, copyright law was frequently used to censor politically sensitive content online. This practice has lessened considerably, but not completely. Journalists have sometimes been pressured to remove content after receiving threats.

On a positive note, digital repression in Ecuador appears to be low, with little evidence of systematic blocking of content. Social media, communications apps, blogs, forums, and circumvention tools are generally accessible. Numerous digital media outlets have emerged over the past decade, and users typically do not need VPNs to access online news. In addition, there are no legal restrictions on digital advocacy or online communities, and social media continued to serve as a tool for social and political mobilisation in Ecuador during the period under review.

The pre-election environment in Ecuador was marked by violence, both in the lead-up to the first vote and the run-off, with candidates routinely subjected to death threats. In this digital age, such threats go hand in hand with smear campaigns, harassment, cyberstalking, misinformation, and disinformation.

Indeed, the online space in Ecuador was so dire and disinformation campaigns so pervasive that Freedom House in 2024 noted an online information space “charged with polarisation,” including:

  • Evidence that inauthentic bot accounts shape online discussions. As an example, following a 2023 presidential debate, social media analysis found that 73% of posts mentioning Noboa were created by bots or potential bots, compared to 72% for González.
  • Troll accounts seeking to support or discredit specific candidates were seemingly deployed across social networks, including on platforms that had previously been less utilised for these purposes, such as WhatsApp.
  • In 2022, Twitter suspended a “botnet” operation composed of 491 inauthentic accounts for engaging in Coordinated Inauthentic Behaviour (CIB) in support of then-President Lasso.
  • Online troll accounts were reportedly deployed on behalf of individuals involved in organised crime.
  • False or misleading content is often spread through digital platforms and social networks, including information about government officials or political candidates, undermining the reliability of the online information environment.
  • Political actors used their online platforms to discredit certain journalists.
  • Deepfakes were not identified.

The offline environment was equally dire. On 9 August 2023, a presidential candidate was assassinated after a campaign event. In December 2024, a car bomb was discovered near a planned campaign stop for President Noboa. Opposition candidate Luisa González reported death threats and increased her security, while Socialist candidate Pedro Granja suspended his campaign following an attack. Centrist Jimmy Jairala’s car was fired upon at the start of his campaign (Boscán, 2025).

Violence fuelled by the drug trade is pervasive in Ecuador, with no sector of society unaffected, from schools and hospitals to polling stations.

Violence seems to be everywhere in Ecuador, affecting its education, health care and politics. The candidates vying to lead the South American country, considered one of the most violent in the world in recent years, are well aware of this reality. Insecurity is the primary concern of their voters: seven in ten Ecuadorians fear going out at night, and the country ranks worst on Gallup’s Law and Order Index, which annually measures perceptions of security in 140 countries.

The violence students encounter in public spaces makes it difficult to teach and integrate digital and media literacy in schools. Since 2022, for example, approximately 90,000 children have dropped out of school in Ecuador. Homicide is the top cause of death among minors, criminal gangs extort students for school access, and 20% of children avoid classes out of fear (Boscán, 2025).

CSOs in Ecuador, however, work against disinformation, including organisations such as Fundamedios (press freedom, digital rights), Ecuador Verifica (fact-checking, electoral integrity), Openlab Ecuador (civic tech, disinformation innovation), Observatory of Communication (media analysis, academic research), and Technical University of Loja (media literacy, teacher training).

Deepfake Disinformation Report: Ecuador

Given the above vulnerabilities, it is unsurprising that disinformation was prevalent, including in the post-election period. EU vs Disinfo (2025) reported an attempt by pro-Kremlin operatives to sow doubt about the electoral results in Ecuador. The false claims were rejected by Ecuador’s National Electoral Council, whose conclusions were supported by the Organisation of American States (OAS) electoral mission (EOM).

OAS/EOM is a deployment mechanism of a regional body comprising 35 member states across the Americas, focused on promoting democracy, human rights, security and development. It deploys independent experts to monitor elections and assess legality, transparency, media access and voter participation. Their reports are widely regarded as credible and influential; though the OAS has at times been accused of bias, its election missions are generally seen as rigorous, impartial and essential safeguards of democratic standards in the region.

As for deepfake disinformation, based on the available information it is unclear whether deepfakes, as technically defined, were disseminated. Reports instead refer to the dissemination of “AI-generated misinformation” or “AI-manipulated content.”

In their report published in February 2025, the OAS/EOM warned of a rise in the use of AI: 23% of viral disinformation included AI-generated material. Information verified as false still circulated on the accounts of journalists and public figures, possibly due to inadequate source verification.

The EU Observer Mission to Ecuador similarly cautioned against a “widespread dissemination of manipulated content on all platforms, frequently amplified through proxy accounts, paid-for content and bot farms,” having found the use of AI-generated content increasing throughout the election campaign, often employed to fuel disinformation and personal attacks against candidates. The content was disseminated across Facebook, Instagram, X and TikTok. The Observer Mission further found:

  • 63 cases of “manipulated” videos and 55 such images, and cloned voices or altered audio on six occasions.
  • 56 of these items were shared by accounts suspected of being trolls or bots, 48 by influencers or content creators, and 27 by ordinary users.
  • While some of the identified AI-generated content was used for satire or genuine campaign promotion, the majority aimed to delegitimise political opponents (91 cases) and spread disinformation (seven cases).

Applying the Deepfake Risk Matrix to Ecuador’s Economic and Digital Ecosystem

For Ecuador, we apply the economic and digital ecosystem variables to understand how conditions unique to the country could contribute to the proliferation of deepfakes.

Interview: Ecuador
CIVICUS discussed Ecuador’s presidential election with Jorge Tapia de los Reyes, Coordinator of the Democracy and Politics Department and the Political Funding Observatory of the Citizenship and Development Foundation (FCD). FCD is an Ecuadorian civil society organisation that promotes participation, citizen monitoring, and open government.

The role of organised citizens was crucial to the success of the democratic process. Through various monitoring and observation initiatives, civil society acted as an effective counterweight to potential irregularities.

The work of civil society went beyond election observation; we are committed to building an informed and critical citizenry. We understand democracy not only as an act of voting, but as a continuous process of education, information, and participation. With this in mind, we set up a system to monitor and verify fake news on social media to combat disinformation and its harmful effects on the electoral process.

Many disinformation campaigns are specifically designed to create fear and apathy and discourage participation. Our work sought to counter these strategies by providing verified information and reminding people that only the National Electoral Council has the legal authority to issue official results or respond to reports of irregularities.

Factor: Economic Inequality & Informality
Contextual Indicators: High informality rate; persistent urban–rural divide; economic precarity among youth and rural voters.
Disinformation Risk Assessment: Economic grievances can be weaponised through populist disinformation and AI-generated narratives.

Factor: Digital Penetration & Access
Contextual Indicators: 83.7% internet penetration; 98.8% mobile connectivity; rural access still lags behind urban centres.
Disinformation Risk Assessment: High connectivity enables rapid disinformation spread; rural gaps hinder verification and response.

Factor: Digital Literacy & Education
Contextual Indicators: The country has a low ranking in the global digital skills index and unequal access to digital education.
Disinformation Risk Assessment: Vulnerable populations, especially youth and older voters, are more susceptible to media manipulation.

Factor: Platform Ecosystem
Contextual Indicators: Dominated by WhatsApp, Facebook and TikTok (74% social media usage).
Disinformation Risk Assessment: Encrypted platforms and short-form video apps amplify disinformation with limited moderation.

Strategic interventions targeted at improving the digital ecosystem and other socioeconomic risks could include:

  • Advocating for rural digital infrastructure through targeted investment and regulatory reform to lower internet costs, especially in underserved regions;
  • Supporting civic tech innovation via initiatives which bring together journalists and developers to build AI-powered fact-checking and transparency tools; and
  • Launching national digital literacy campaigns, prioritising youth, rural voters, women and other groups most vulnerable to synthetic media manipulation.
4.3 Germany

The German Federal Election was held on 23 February 2025 to elect the 630 members of the country’s Bundestag (Parliament). The members of the Bundestag are directly elected every four years by German citizens through a mixed-member proportional system. Once the Bundestag is sworn in, it elects the Chancellor from its ranks.

Due to suppression of protest movements in the country, particularly related to Palestine, Germany is rated as having ‘narrowed’ civic space on the CIVICUS Monitor, with a score of 67/100.

During the pre-election period, videos on social media showed cases of officers pushing, punching, and choking non-resisting protesters in Germany. In one case, a protester was injured to the point of losing consciousness and was reportedly not given any medical assistance for 20 minutes. Palestinian solidarity protests also faced bans and obstruction; journalists were reportedly detained while covering these protests and documenting the police use of excessive force.

In its annual Freedom in the World Report 2025, Freedom House ranks Germany as “free” with a 93/100 score, finding that:

Germany is a representative democracy with a vibrant political culture and civil society. Political rights and civil liberties are largely assured in law and practice. The political system is influenced by the country’s totalitarian past, with constitutional safeguards designed to prevent authoritarian rule.

While Germany has remained firmly committed to liberal democracy and its tenets in the 21st century, the rise of right-wing populism and inflammatory anti-immigrant rhetoric threatens to blemish this record. Such narratives are ripe for disinformation and are at risk of being weaponised to polarise German society.

Freedom House has documented several concerning developments over the past few years in Germany, as follows:

  • While freedom of belief is protected by law, eight states ban headscarves for teachers, and Berlin and Hesse prohibit civil servants from wearing them.
  • Antisemitism has been on the rise. The Ministry of the Interior recorded an exponential increase in attacks on individuals of the Jewish faith.
  • Islamophobia also remains a concern. Attacks against those of the Muslim faith have also increased.
  • In late 2023, after the 7 October Hamas attacks and the war in Gaza, pro-Palestinian protests in Germany were heavily restricted. By early 2024, some restrictions had been challenged in court and some overturned, but restrictions on slogans and conditions on protest size remained common. Incidents of police brutality against pro-Palestine protesters have also been reported.
  • Attacks on refugees and refugee housing declined from approximately 3,500 such cases in 2016, but have nonetheless continued to occur in the country.

Germany is classified as “free” according to Freedom on the Net, with a score of 77 out of 100, which reflects near-universal internet access (93% penetration per Datareportal), a vibrant online media environment (77% of Germans use social media) and a strong and fair judiciary. However, the country faces ongoing challenges from Russian disinformation campaigns and cyberattacks, as well as accusations of censorship. Several related developments are as follows:

  • The government occasionally blocks websites or other online content. A large number of Russian-linked websites and accounts are blocked because of alleged propaganda targeting civil society.
  • German online spaces are marked with significant content manipulation, which in some cases has been linked to the far right or to foreign interference campaigns, primarily originating in Russia. Examples include:
    • Far-right actors have spread false and misleading information online. Members of Parliament for the right-wing populist Alternative for Germany (AfD) party have used AI image-generation tools to spread xenophobic messages.
    • Disinformation campaigns from Russian actors also continue to target Germany, and some have involved German politicians. A March 2024 investigation conducted by the Czech Republic’s Security Information Service reported on a Russian campaign that had allegedly approached and paid European politicians, including members of the Bundestag, to question the “territorial integrity, sovereignty, and freedom of Ukraine” on the pro-Kremlin news site Voice of Europe.
    • In January 2024, the country’s foreign office uncovered 50,000 fake accounts on X that posted in German, spreading messages critical of Germany’s support for Ukraine and of the current German government.

Germany has one of the most tightly regulated digital spaces in the world, shaped by both national laws, such as the NetzDG, and EU-wide frameworks, such as the GDPR and the Digital Services Act (DSA). Various CSOs have raised concerns about over-regulation and the burden of bureaucratic compliance with laws such as the GDPR.

Ahead of the 2025 election, the EU’s AI Act was passed and was viewed as a significant step towards tackling deepfake disinformation. The AI Act defines a “deep fake” as an “AI‑generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.”

It requires the following compliance:

  • Deepfake developers and users must disclose AI-generated content to prevent misinformation.
  • AI content should be labelled through classification or watermarking.
  • Deepfakes that may impact rights or society, such as political manipulation or defamation, are high-risk and face stricter regulations.
  • Traceability and accountability require keeping records of deepfake creation for possible origin tracking.
  • Malicious uses for social scoring or illegal surveillance are banned as unacceptable risks under the AI Act.
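The disclosure and traceability requirements above lend themselves to a machine-readable record attached to AI-generated media. The sketch below is purely illustrative: the AI Act mandates disclosure and labelling but prescribes no particular schema, so every field name here is an assumption.

```python
import json

def make_disclosure_record(generator: str, content_type: str) -> str:
    """Build a minimal, machine-readable disclosure for AI-generated media.
    The schema is illustrative only; the AI Act requires disclosure and
    traceability but does not prescribe these field names."""
    record = {
        "ai_generated": True,          # transparency flag
        "generator": generator,        # supports origin tracking
        "content_type": content_type,  # e.g. "image", "audio", "video"
        "notice": "This content was generated or manipulated by AI.",
    }
    return json.dumps(record, sort_keys=True)
```

In practice, provenance standards such as C2PA embed comparable records directly into media files, which is one route platforms may take to satisfy the labelling requirement.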

Failure to comply with these requirements can trigger steep penalties, with fines of up to €35 million or 7% of a company’s global annual turnover for serious violations. By attaching such weighty sanctions, the AI Act signals the EU’s intention to treat deepfake transparency as a matter of democratic integrity.
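Because the ceiling is whichever of the two caps is higher, the binding figure depends on a company’s size; a one-line sketch (the function name is ours):

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious AI Act violations: EUR 35 million
    or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)
```

For a firm with EUR 1 billion in global turnover the 7% cap (EUR 70 million) binds, while smaller firms hit the flat EUR 35 million ceiling first.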

As Germany headed to the polls in 2025, key issues included the rising cost of living, an energy crisis, and declining confidence in government, with faith in the German government having dropped to 50% in 2024 – its lowest point in over a decade and down considerably from a high of 65% in 2020 (Gallup, 2025). Low trust in government marks a significant vulnerability to disinformation: when audiences lose trust in government, they seek information elsewhere, creating an information vacuum for malign actors to fill.

The rise of the far-right poses another concern. As Election Day approached, Germany entered what was described as an “unusually tense” campaign (ACLED, 2025) due to a political landscape that had shifted from centrism to greater polarisation, as evidenced by widespread protests before the elections. Weeks before the election, hundreds of thousands of Germans protested nationwide after the centre-right Christian Democratic Union relied on support from the far-right Alternative for Germany (AfD) to push a parliamentary motion for stricter migration laws.

ACLED recorded a 17% rise to over 4,650 distinct demonstration events, including 1,450 demonstrations opposing right-wing extremism, predominantly led by grassroots movements, CSOs, and centrist parties. This was a welcome pushback, reflected in the AfD’s loss of support after the ballots were counted. Concerns over disinformation were high amongst the German public, with a poll finding that 88% feared manipulation by foreign actors or governments (Shelton, 2025). This is a strong resilience factor: a signal that the population is aware of, and to some degree inoculated against, the threat of disinformation.

Deepfake Disinformation Report: Germany

There are as yet no reports that comprehensively cover the volume of disinformation in general, or the number of deepfakes in particular, in Germany. This could be a sign of “deepfake fatigue” or of “no news is good news” after the 2024–2025 super election period; alternatively, the AI Act and similar measures may be acting as a successful deterrent in the EU.

Some data can be collated from two sources, outlined below, but they are somewhat limited in detail.

  1. NewsGuard and Correctiv
    NewsGuard is a private, US-based company that provides clients with tools to evaluate the credibility of online news and information sources. In partnership with Correctiv, an independent German newsroom, NewsGuard identified 22 false claims related to the election in Germany, including disinformation by Russian actors targeting mainstream political parties that support NATO and Ukraine.
    Among the findings was a network of 102 AI-generated German-language fake news websites, allegedly linked to US fugitive-turned-Kremlin-propagandist John Mark Dougan. Using AI tools like OpenAI’s ChatGPT and DALL-E 3, Dougan is said to have created over 160 fake news sites, disseminating false narratives to millions worldwide and, for German audiences, content favourable to the AfD.
    Dougan, crowned 2024 Disinformer of the Year by NewsGuard, is a former Florida deputy sheriff, now a “source of information on Russia”, who fled the US to Russia while facing a slew of charges, including extortion. He has denied any ownership of the websites.
    The websites bear the names of well-known, defunct German media brands and are filled with AI-generated content. The sites look similar and publish articles containing false information about German politicians who are pro-NATO and pro-Ukraine, particularly from the Green Party, which is known for its strong support for Ukraine and the green transition.
    There is no reference to the content having been deepfakes; the reporting instead points to AI-generated text and images, with less emphasis on video and audio, used to populate the “news stories.”
  2. The Institute for Strategic Dialogue (ISD)
    The ISD uncovered a coordinated network on X (formerly Twitter) spreading disinformation about German politicians and election-related terror threats. The network of approximately 50 accounts shared traits typical of pro-Russia influence operations, disseminating false claims through videos designed to look as though they came from media outlets, law enforcement agencies, and academics. At times, AI-generated deepfakes were used for audio and visual content.
    A secondary network of more than 6,000 accounts, dedicated to reposting content to ensure reach and virality, exhibited the hallmarks of coordinated inauthentic behaviour.
    Between 27 and 30 January 2025 alone, the network produced 19 election-related videos, marking an escalation in activity. These included disinformation about supposed terror threats tied to the elections and false accusations of corruption and paedophilia targeting prominent political figures, including CDU chancellor candidate Friedrich Merz, Die Linke’s Janine Wissler, and Armin Laschet, the former CDU chancellor candidate. To boost credibility, the network branded its AI-manipulated videos with logos from respected outlets such as Deutsche Welle, BBC, and Sky News, while also impersonating government bodies and universities.
    The tactics (AI-altered audio, fake captions, QR codes and the tagging of journalists) mirror those of the Russia-aligned “Operation Overload” campaign (also known as Matryoshka). Although the campaign’s domestic impact appears limited, with most engagement driven by bot networks, the 48 core accounts nonetheless attracted 2.5 million views, with engagement tripling in January 2025.
Interestingly, the disinformation was disseminated not in German but in English, Spanish, and Arabic, suggesting its true objective was less about swaying German voters and more about undermining confidence in German democracy among international audiences, thereby aligning with broader Kremlin influence strategies across Europe.
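The amplification pattern described above, thousands of accounts reposting the same items in tight time windows, is one of the classic signals analysts use to surface coordinated inauthentic behaviour. A minimal sketch, with data layout and thresholds chosen for illustration rather than drawn from the ISD’s methodology:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated_accounts(posts, window_seconds=300, min_accounts=5):
    """Flag groups of accounts that share identical content within a short
    time window, a simple signal of coordinated inauthentic behaviour.

    `posts` is a list of (account_id, content_id, timestamp) tuples;
    the thresholds are illustrative defaults, not empirically derived.
    """
    by_content = defaultdict(list)
    for account, content, ts in posts:
        by_content[content].append((ts, account))

    flagged = set()
    for events in by_content.values():
        events.sort()  # chronological order
        for anchor_ts, _ in events:
            # Distinct accounts posting this content within the window
            cluster = {acc for ts, acc in events
                       if abs((ts - anchor_ts).total_seconds()) <= window_seconds}
            if len(cluster) >= min_accounts:
                flagged |= cluster
    return flagged
```

Real investigations combine several such signals (shared phrasing, synchronised account creation, recycled media) rather than relying on timing alone.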

Interview: Germany

CIVICUS discussed AI governance challenges with Federica Marconi, researcher in the multilateralism and global governance programme at Italy’s Institute of International Affairs, a non-profit think tank that promotes awareness of international politics and contributes to the advancement of European integration and multilateral cooperation. Marconi noted the need for the greater inclusion of civil society in the implementation of AI governance. Part of AI governance includes regulations against the use of deepfakes.

Given AI’s impact across sectors, legitimate regulation requires meaningful civil society inclusion. Civil society organisations provide technical expertise, amplify excluded groups’ perspectives and advance transparency and accountability. Their participation is crucial to prevent decisions from being dominated by powerful private stakeholders that are driven by economic interests rather than the public good.

Civil society’s role is widely acknowledged as essential, but it faces two problems: getting access and having influence.

Access to multilateral forums varies. Some arenas restrict civil society participation entirely; others have established structured channels. But even when these mechanisms exist, access alone isn’t enough: having a seat at the table doesn’t guarantee that civil society voices will shape decisions.

The solution requires overcoming the notion that state leadership and stakeholder participation are competing legitimacy models. Civil society perspectives can be incorporated through governments via national consultations, advisory bodies or official delegations, while civil society can also engage independently with multilateral institutions through established participation channels.

Applying the Deepfake Risk Matrix to the Regulatory Framework in Germany

Germany’s regulatory context illustrates how proactive risk management can both anticipate and mitigate deepfake disinformation threats. Together with the EU, Germany has built a forward-looking legal framework that recognises deepfakes as a systemic risk rather than an afterthought.

That being said, gaps remain; the AI Act still needs German operationalisation, and platform compliance is uneven, leaving spaces where manipulative content can still circulate. The German case thus shows how robust regulation can address critical vulnerabilities before they escalate, while also underscoring the importance of timely, coordinated risk identification.
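One way to operationalise the Deepfake Risk Matrix is to encode each dimension’s assessment and derive a headline risk level. The scoring below is our illustration: the Matrix assigns levels qualitatively, and the per-dimension values are one possible reading of the German regulatory context, not part of the Matrix itself.

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def headline_risk(assessments: dict) -> Risk:
    """Aggregate per-dimension assessments into one headline level.
    Taking the maximum is deliberately cautious: a single high-risk
    dimension is enough to warrant an escalated strategic response."""
    return max(assessments.values())

# One illustrative reading of Germany's 2025 regulatory picture
germany_2025 = {
    "Legal Framework": Risk.MEDIUM,        # enforcement gaps until 2026
    "Constitutional Safeguards": Risk.LOW,
    "Platform Regulation": Risk.MEDIUM,    # uneven enforcement by provider
    "AI-Specific Oversight": Risk.MEDIUM,  # fragmented pending KIMÜG
    "Civil Society & Media Freedom": Risk.LOW,
    "International Engagement": Risk.LOW,
}
```

Taking the maximum rather than the average is a design choice: one weak dimension, such as an unenforced legal framework, can dominate the overall threat picture even where other safeguards are strong.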

Regulatory Framework Matrix – Germany (2025)

DimensionContextual IndicatorsDisinformation Risk Assessment
Legal Framework
Germany applies the EU AI Act, with core provisions already in force since February 2025. Domestic implementation is underway via the draft KI Market Surveillance Act (KIMÜG).
A strong legal foundation is emerging, but enforcement gaps remain until the full rollout in 2026.
Constitutional Safeguards
Rooted in lessons from Germany’s totalitarian past, the country has robust protections against authoritarianism and propaganda.
High resilience to state-sponsored disinformation; strong judicial oversight and civil liberties.
Platform Regulation
Relies on the EU Digital Services Act (DSA) and national consumer protection laws for platform accountability.
Platforms are legally bound to moderate harmful content, though enforcement still varies by provider.
AI-Specific Oversight
No standalone German-specific AI law yet; oversight will fall to market surveillance authorities under KIMÜG.
Institutional capacity is growing, but current fragmentation limits proactive disinformation detection.
Civil Society & Media Freedom
Germany scores 93/100 according to Freedom House; the press and watchdog ecosystem remains strong and active.
Strong resilience through media and fact-checking, with civil society playing a central watchdog role
International Engagement
Germany is a key driver of EU efforts on AI ethics, human rights, and democratic safeguards.
High global influence helps shape international norms around AI and disinformation governance.
4.4 Singapore

On 3 May 2025, Singapore went to the polls to elect 97 Members of Parliament through a mix of Single Member Constituencies and Group Representation Constituencies in a parliamentary system. Once Parliament is constituted, the President (who holds a largely ceremonial role and is elected in a separate election) appoints the head of the majority political party as the Prime Minister.

We rate Singapore’s civic space as ‘repressed’ in the CIVICUS Monitor’s People Power Under Attack report. There are ongoing concerns over the use of restrictive laws and the harassment of human rights defenders. The 2019 Protection from Online Falsehoods and Manipulation Act (POFMA) grants excessive powers to the government and has been used to target activists and critics and to block websites. In addition, there are ongoing restrictions on peaceful assembly under the 2009 Public Order Act (POA).

Freedom House’s Freedom in the World 2025 ranks Singapore as “partly free,” with a score of 48/100. The ruling People’s Action Party (PAP), together with the powerful Lee family, has dominated Singaporean politics since 1959. Due to budget constraints, Freedom House was unable to publish a full country report on Singapore; the Reporters Without Borders (RSF) World Press Freedom Index (2025) is therefore relied upon to ensure some consistency across the case studies. While it measures press freedom specifically, this is an indicator critical to assessing whether an information landscape is resilient or vulnerable to disinformation.

RSF ranks Singapore low across its indicators, placing it 126th out of 180 countries worldwide in terms of press freedom. The RSF report states, “While Singapore boasts of being a model for economic development, it is an example of what not to be regarding freedom of the press.” Specific concerns include:

  • Political Context: The PAP appoints media board members and editors who must enforce the official position of the government. Authorities control foreign media access in Singapore.
  • Legal Framework: Singapore’s “anti-fake news” law permits the government to correct online content deemed false or that affects public confidence. The Foreign Interference (Countermeasures) Act of 2023 expands state authority over media.
  • Economic Context: There is a lack of plurality in the media; the sector is dominated by two groups: MediaCorp (state-owned) and SPH Media Trust (government-funded). Independent outlets often self-censor due to legal and financial pressures.
  • Socio-cultural Context: “Out of bounds markers” restrict coverage of sensitive issues, resulting in self-censorship and the predominant dissemination of government-approved views on topics such as labour and human rights.
  • Safety and Security: Journalists and bloggers face lawsuits, defamation claims, and smear campaigns from ruling party figures and supporters for their critical reporting.

On internet freedom, Singapore does not fare much better in Freedom House’s 2024 Freedom on the Net report. While boasting an internet penetration even higher than Germany’s, at 95.8% per Datareportal, with 88.2% of the population using social media, Singapore exemplifies that internet access does not equate to the “democratisation” of technology. Several examples are as follows:

  • The government routinely orders internet service providers (ISPs) to block access to websites it deems to have published “false information” in violation of the terms of its “anti-fake news law”. As an example, in May 2023, Singapore’s Ministry of Communications and Information (MCI) ordered the independent news site Asia Sentinel to publish a correction notice after it reported on alleged state harassment of political dissenters. Asia Sentinel placed the notice under an editor’s note, affirming it stood by the story; however, the authorities deemed this non-compliant. The following month, the Infocomm Media Development Authority (IMDA) instructed ISPs to block access to the site under Section 11 of POFMA. The website appears to still be blocked at the time of writing.
  • The Online Criminal Harms Act (OCHA) was passed in July 2023, giving authorities new powers to block online content, services, and applications.
  • The Foreign Interference (Countermeasures) Act (FICA) of 2021 came into effect in December 2023, granting the authorities broad latitude to restrict online activity. It also allows the authorities to designate individuals and organisations as “politically significant persons”, which requires them to submit regular reports to the government on their foreign affiliations and donations from foreign citizens.
  • The process for restricting online and digital content lacks explicit provisions regarding transparency. Recent legislation intended to address online falsehoods, foreign influence operations, and criminal activity online does not provide independent appeals mechanisms.
  • Self-censorship occurs among journalists, commentators, and users who are typically aware that some types of speech or expression may result in civil or criminal consequences.
  • While opposition parties can use social media for campaign purposes, their activities are significantly constrained by police investigations and the arrest of those participating in online activism.

There are very few independent civil society organisations in Singapore that investigate or fact-check disinformation. The only such body identified, BlackDot Research, appears to be more of a market research firm than an independent CSO.

The true extent of disinformation and deepfake threats is, therefore, difficult to quantify. There are often vague references to “foreign manipulation,” with very few empirical studies from CSOs to substantiate these. Most reporting on the subject comes through government channels, and in an environment where media and internet freedoms are tightly restricted, it is often unclear where the truth lies.

Rather than plural independent assessments, Singapore relies heavily on legislation and state-driven interventions, including a raft of laws empowering authorities to determine what constitutes a falsehood. In 2024, this framework was expanded to include a law on deepfakes, further consolidating state oversight of online information.

On 15 October 2024, Singapore’s Parliament passed an amendment to the Elections Act, prohibiting the publication, boosting, sharing and reposting of deepfake content depicting election candidates, as well as banning the use of deepfakes during general elections. Outside of elections, deepfakes are regulated by broader laws, such as the “anti-fake news law,” or by criminal statutes if they involve defamation, fraud or harm.

The lead-up to the 2025 election was termed a “foregone conclusion” for the PAP, despite the fact that the party had never before faced as “charismatic” a politician as Pritam Singh, who leads the opposition Workers’ Party (Jey, 2025).

…the traditional pattern of Singaporean elections that features the PAP as its main protagonist looks set to continue, as political leanings are still not defined by ideological commitments but instead by how voters position themselves for or against the ruling party. The PAP’s ultimate victory – winning a majority of the 97 electable seats in Parliament – is a basically foregone conclusion. Still, one question is whether the ruling party will be able to retain its two-thirds supermajority. The party looks very likely to do so.

Deepfake Disinformation Report: Singapore

The government-linked outlet CNA reported a single case of deepfake content during the campaign period, noting that 73 TikTok videos contained “digitally generated or manipulated visuals of prospective candidates,” potentially violating the new deepfake law. This appeared more like a compliance check for breaches than a systematic monitoring effort. Beyond this, there are no confirmed reports of deepfake electoral disinformation in Singapore, though whether this reflects an actual absence or simply a lack of independent scrutiny remains unclear.

Applying the Deepfake Risk Matrix to Relevant Actors in Singapore

Singapore is a complex case study and a masterclass in state legal containment and narrative engineering. Not all is lost, however; there is still room for external actors and civic ingenuity to carve out space for truth.

Actors Matrix – Singapore (2025 Election Context)

Actor TypeKey EntitiesRole in Disinformation Ecosystem
State Institutions
Ministry of Communications and Information, IMDA, Ministry of Home Affairs
Enforce POFMA and FICA; issue correction notices; monitor foreign interference; shape narrative control
Political Actors
People’s Action Party (PAP), opposition parties (e.g., PSP, WP)
PAP dominates narrative via state-linked media; opposition faces constraints and risks of censorship.
Media Outlets
Mediacorp (Channel News Asia), The Straits Times, Today Online
State-influenced; high public trust but limited editorial independence; alternative voices suppressed
Tech Platforms
Meta, TikTok, WhatsApp, YouTube
High penetration; platforms are subject to takedown orders and fines under the deepfake law.
Civil Society & Academia
NUS, SMU, RSIS, DISA, select NGOs
Limited space for dissent; some academic voices raise alarms but operate within tight boundaries.
Diaspora & Expats
Singaporean activists abroad, independent journalists, and OSINT researchers
Increasingly important in countering state-filtered information and amplifying alternative narratives.

Strategic Interventions – Singapore:

  • Invest in OSINT networks: Support independent researchers and diaspora-led initiatives that monitor disinfo campaigns, especially those targeting opposition figures or civil society.
  • Strengthen expat-led media ecosystems: Fund and amplify platforms run by Singaporeans abroad that offer alternative coverage and fact-checking outside state control.
  • Build regional disinfo coalitions: Partner with Southeast Asian watchdogs to track cross-border influence operations and share threat intelligence.
  • Support digital resilience training: Equip opposition parties, youth activists, and journalists with tools to detect deepfakes, resist manipulation, and navigate legal constraints.
  • Leverage encrypted civic channels: Use secure messaging and decentralised platforms to distribute verified content and counter disinfo narratives without triggering censorship.