FUTURE-PROOFING ELECTIONS AGAINST DEEPFAKE DISINFORMATION

PART 3: PREVALENCE OF DEEPFAKE DISINFORMATION IN 2024/25 ELECTIONS

In this part of the paper, we use reliable data to quantify the prevalence of deepfake disinformation during the 2024 and 2025 global election cycles. Reliable data is needed to move beyond alarmism, anecdotal evidence, and speculation, and to reveal the true frequency, forms, and intensity of deepfake disinformation deployed during elections, as well as the range of methods and motivations behind the associated campaigns. This understanding is necessary to craft evidence-based mitigation strategies.

3.1 GenAI and Deepfake Landscape ahead of the 2024 Super Election Year

Ahead of the 2024 electoral cycle, GenAI platforms and their deepfake technology had been on the market for a little over a year. OpenAI’s ChatGPT led the global text-generation market, while its DALL-E models led the market for realistic image generation. Google’s Gemini competed in both text and image synthesis, while Midjourney gained traction for highly realistic images used in both art and misinformation.

Video tools like Synthesia and DeepBrain AI allowed for the creation of AI avatars and virtual anchors for media outlets, while apps such as Reface and DeepSwap popularised face-swaps and deepfakes among general users.

In audio, voice cloning tools had become mainstream. ElevenLabs dominated with hyperrealistic, multilingual voices, joined by open-source projects such as TorToiSe and research models like VALL-E, which enabled voice replication from minimal data. While these technologies aid accessibility and creative work, they can also be used for impersonation scams and disinformation.

Governments, policymakers, academics and experts were almost unanimous in their concern that a massive wave of false information would flood the information ecosystem.

A small number of outliers believe this to be alarmist. In what has become a canonical paper, Felix M. Simon, Sacha Altay, and Hugo Mercier (2023) argue that the threat of widespread misinformation was “overblown,” and list their reasons as follows:

  • Quantity of misinformation: GenAI only dramatically lowers the cost and effort required to produce manipulative content; the spread of misinformation has always been limited more by demand than by supply. A larger pool of misinformation does not automatically translate into greater impact if audiences are not receptive.

  • Quality of misinformation: Realism is not the sole determinant of persuasiveness. Emotional resonance, narrative fit, and audience predispositions often matter more than surface quality. A slick deepfake might still fall flat if it does not align with what an audience is primed to believe.

  • Personalisation of misinformation: While GenAI can lower barriers to producing such content, it does not fundamentally introduce new microtargeting capabilities. The mechanics of tailoring messages to digital identities already existed through online advertising and data analytics.

They were right.

It is worth reflecting on their last point about the personalisation of misinformation. There were concerns that GenAI would enable greater microtargeting of disinformation, a perspective that may have been founded on alarmism. GenAI does not introduce new microtargeting methods; it merely lowers the cost and effort of tailoring messages to inferred digital identities, psychographics, and similar data. Its current capacity to produce synthetic media tailored to identified characteristics such as demography, income and location remains limited (Simchon et al., 2024).

Microtargeting itself is not inherently wrong; political parties have used it as a tactic to reach potential voters with tailored messaging before and after the advent of the internet. It is when digital identity data is obtained unethically or illegally and used to manipulate voters with personalised, fear-based disinformation, as seen in the Cambridge Analytica scandal, that trouble arises.

This, of course, is a GenAI ability that will need to be monitored in the coming years, and appropriate mitigation strategies will need to be devised and implemented.

3.2 Deepfake Disinformation during the 2024 Super Election Year

The International Panel on the Information Environment (IPIE) – an independent and global science organisation providing scientific knowledge about the health of the world’s information environment – found just 215 instances of AI-generated deepfake electoral disinformation across all 50 countries with competitive national elections in 2024 (IPIE, May 2025).

Separately, the Knight Institute reviewed 78 cases of AI use in the WIRED AI Elections Project, which tracked political AI content during the 2024 global elections, and found no deceptive intent in 39 instances (Kapoor & Narayanan, 2024).

This was far lower than expected, a finding echoed by Meta’s own reporting. Meta closely monitored the potential use of deepfakes by covert influence campaigns and found only incremental productivity and content-generation gains, with AI-generated content accounting for less than 1% of all fact-checked misinformation on its platforms.

Unfortunately, other social media platforms did not publish reports detailing the full scope of the proliferation of deepfake disinformation. Even Meta’s was a little thin on the details. Data access remains an area ripe for civil society advocacy. Platforms must publicly furnish complete datasets on the nature and prevalence of all disinformation, including deepfakes, to facilitate independent monitoring and informed interventions. This would include granular data on content reach and engagement, amplification networks, audience exposure and reactions, as well as transparency in detection, labelling, and enforcement measures.
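
To make this advocacy ask concrete, the sketch below shows one hypothetical shape such a transparency dataset could take. The record type and every field name are our own illustrative assumptions, not any platform’s actual reporting schema.

```python
# Hypothetical sketch of the kind of structured transparency record civil
# society could ask platforms to publish. All field names are illustrative
# assumptions, not any platform's actual schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DisinfoTransparencyRecord:
    content_id: str                  # platform-internal identifier (pseudonymised)
    content_type: str                # e.g. "video", "audio", "image", "text"
    is_synthetic: bool               # flagged as AI-generated / deepfake
    detection_method: str            # e.g. "classifier", "user_report", "fact_checker"
    label_applied: Optional[str]     # e.g. "Made with AI"; None if unlabelled
    reach: int                       # unique accounts that saw the content
    engagements: int                 # likes, shares and comments combined
    amplifier_accounts: int          # accounts in the amplification network
    enforcement_action: str          # e.g. "removed", "downranked", "labelled", "none"
    country_codes: list[str] = field(default_factory=list)  # where it circulated

# Hypothetical example record
record = DisinfoTransparencyRecord(
    content_id="abc123", content_type="video", is_synthetic=True,
    detection_method="fact_checker", label_applied="Made with AI",
    reach=120_000, engagements=9_400, amplifier_accounts=85,
    enforcement_action="labelled", country_codes=["ZA", "NG"],
)
print(record)
```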

Despite the relatively low deployment of deepfake disinformation across the globe, the incidents and their nature, as recorded by the IPIE, are still worth reflecting on (a short arithmetic sketch after this list translates the headline percentages into approximate incident counts):

  • 80% of countries with competitive elections saw GenAI used during campaigns; India and the USA reported the highest number of incidents (30 each). GenAI incidents were in double digits across eight countries (16%), ranging from 10 in France to 30 in India and the USA.

  • The countries with the fewest instances (some as low as one) were Belgium, El Salvador, Finland, Lithuania, Madagascar, Maldives, Mauritania, Mozambique, North Macedonia, Palau and Panama.

  • The remaining 20% of countries with no identified GenAI usage data are primarily those with smaller populations.

  • Explanations for the lack of recorded GenAI usage targeting elections in both groups of countries include less journalistic coverage of these countries’ elections and lower internet penetration rates.

  • An additional explanation could be restrictions on freedom of expression, which could lead to less user-generated content.

  • The original source of GenAI-generated content was unknown in 46% of cases. Of untraceable cases, 79% involved suspected political manipulation.

  • Foreign actors were identified in 20% of cases, all linked to malign uses, such as Russian and Chinese coordination.

  • Paid commercial actors were involved in 6% of the cases. In 31% of these cases, paid commercial actors worked in tandem with partisan groups, political parties and candidates, and/or foreign actors, whereas 69% involved paid commercial actors acting alone.

  • Constructive uses were also noted: 38% of national party and candidate-related GenAI incidents were beneficial, with 16% focused on civic outreach or accessibility.
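
As flagged above, the headline percentages can be translated into rough incident counts. The sketch below does that arithmetic against the 215 incidents the IPIE identified; the derived counts are approximations, since the report publishes rounded percentages.

```python
# Back-of-the-envelope arithmetic on the IPIE figures quoted above.
# Counts are indicative only: the report gives rounded percentages.
total_incidents = 215                                  # incidents identified by the IPIE (2024)

unknown_origin = round(total_incidents * 0.46)         # original source untraceable
suspected_manipulation = round(unknown_origin * 0.79)  # of untraceable cases
foreign_actors = round(total_incidents * 0.20)         # all linked to malign uses
commercial_actors = round(total_incidents * 0.06)      # paid commercial actors

print(f"Untraceable origin:          ~{unknown_origin} incidents")
print(f" ...suspected manipulation:  ~{suspected_manipulation} of those")
print(f"Foreign actors:              ~{foreign_actors} incidents")
print(f"Paid commercial actors:      ~{commercial_actors} incidents")
# Untraceable origin:          ~99 incidents
#  ...suspected manipulation:  ~78 of those
# Foreign actors:              ~43 incidents
# Paid commercial actors:      ~13 incidents
```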

Comprehensive data on the detection of electoral deepfake disinformation during the 2025 election cycle is unavailable, as news coverage has notably declined. However, according to the CIGI (2025), the 2025 global elections confirmed a key finding of the IPIE report:

Ultimately, AI is not a stand-alone disruptor but rather a powerful new layer in existing influence operations, with the potential to outpace rules and regulations if not managed appropriately.

Altay and Mercier argued in 2025, as they had in 2023, that concerns about GenAI threats were exaggerated, cautioning that alarmism over GenAI could distract from ongoing harms, such as deepfakes targeting women and excluded groups. They warned:

By overemphasizing the risks of GenAI in the context of elections, we risk overlooking the broader, more insidious ways in which GenAI is misused, such as enabling targeted harassment and amplifying harmful biases. These include the harassment of women and minorities. The creation and distribution of AI-generated fake nudes, mostly targeted at females, is a form of gendered violence that seeks to silence women in public life and can be used to humiliate, discredit, and threaten women, which may have a chilling effect on their participation in politics. Similarly, minorities are targeted by AI-assisted harassment campaigns, including racially biased or xenophobic attacks that are amplified through social media. These targeted campaigns undermine efforts to build inclusive political spaces.

3.3 Resilience Factors against Deepfake Disinformation

In addition to the reasons identified by the IPIE, such as low internet penetration rates, limited media coverage, and digital authoritarianism, as well as those highlighted by Simon, Altay, and Mercier, we explore other factors that may have built resilience against the spread of deepfake disinformation.

These must not be seen as definitive explanations but rather as a summary of the collective measures implemented before elections that may have played a resilience role. The level of concern and corresponding preparation for disinformation threats in this context was unprecedented. It may signal the need for continued proactive planning.

3.3.1 Public Awareness of Deepfake Disinformation

There appears to have been a high level of awareness across the globe of the threat posed by AI and disinformation, as revealed by a 29-country Ipsos survey (Ipsos, 2023). The survey data has limitations beyond the typical constraints of opinion polling; it was also conducted in mid-2023, so conditions and perceptions may have changed since then.

In the survey – Global Views on AI and Disinformation – Ipsos sought to measure how people around the world perceived the risks posed by AI-generated misinformation. It surveyed over 21,000 people in 29 countries: three-quarters believed that AI makes it easier to produce realistic disinformation, and just over half believed it would worsen the problem of disinformation. Despite this, most were confident they could spot fake content, revealing a potential gap between perception and reality. Many also felt that political and media dishonesty had increased over the past thirty years, indicating a broader shift away from established institutions as sources of information and a growing reliance on sources such as social media influencers for “alternative truths,” which can lead to inadvertent consumption of and belief in misinformation.

The table below presents the survey results for Global South countries, where respondents indicated the extent to which they agreed or disagreed with the following two statements:

  • “Artificial intelligence is making it easier to generate very realistic fake news stories and images.”

  • “Artificial intelligence will make misinformation and disinformation worse.”

Country | Is GenAI making it easier to create realistic disinformation? (% of respondents who agree) | Will GenAI make disinformation worse? (% of respondents who agree)
South Africa | 77% | 52%
Indonesia | 89% | 45%
Peru | 82% | 46%
Chile | 81% | 53%
Singapore | 80% | 46%
Thailand | 78% | 42%
Colombia | 77% | 58%
Argentina | 77% | 45%
Malaysia | 76% | 51%
Mexico | 75% | 42%
Brazil | 74% | 51%
India | 66% | 52%
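
For readers who wish to recreate the original chart from these values, the following is a minimal sketch that plots the table as a grouped bar chart. It assumes Python with matplotlib and numpy installed; the figure styling is our own.

```python
# Minimal sketch reproducing the chart above from the Ipsos (2023) table
# values; assumes matplotlib and numpy are installed.
import matplotlib.pyplot as plt
import numpy as np

countries = ["South Africa", "Indonesia", "Peru", "Chile", "Singapore",
             "Thailand", "Colombia", "Argentina", "Malaysia", "Mexico",
             "Brazil", "India"]
easier = [77, 89, 82, 81, 80, 78, 77, 77, 76, 75, 74, 66]  # % agree: easier to fake
worse  = [52, 45, 46, 53, 46, 42, 58, 45, 51, 42, 51, 52]  # % agree: will get worse

x = np.arange(len(countries))
width = 0.38

fig, ax = plt.subplots(figsize=(11, 5))
ax.bar(x - width / 2, easier, width, label="GenAI makes realistic disinformation easier")
ax.bar(x + width / 2, worse, width, label="GenAI will make disinformation worse")
ax.set_xticks(x)
ax.set_xticklabels(countries, rotation=45, ha="right")
ax.set_ylabel("% of respondents who agree")
ax.set_title("Global South perceptions of GenAI and disinformation (Ipsos, 2023)")
ax.legend()
fig.tight_layout()
plt.show()
```
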
It is unlikely that these figures accurately reflect the Global South as a whole; for example, the survey covers only one African country, despite the continent’s vast differences in internet penetration rates. The data is nonetheless instructive and offers a window into perceptions of GenAI harm.

3.3.2 Platform Readiness

Another factor that may have limited the scope of AI-generated deepfakes is the preparedness of social media platforms. Anticipating a surge in synthetic content, major platforms introduced policies, detection systems, and labelling initiatives to demonstrate readiness. However, the level of preparedness was uneven; some companies invested in AI detection and partnerships with fact-checkers, while others relied heavily on voluntary codes and vague commitments.

It must be noted that much of the information about these measures comes from the platforms themselves. Apart from Meta, which released limited information, no platform provided details on the success, or lack thereof, of its readiness measures. This raises questions about whether these strategies were designed to safeguard electoral integrity or to pre-empt regulatory scrutiny.

Platform | Preparedness Measures | Limitations
Meta (Facebook, Instagram, WhatsApp) | Updated misinformation policies, partnered with fact-checkers, invested in AI to detect synthetic media, and launched “Made with AI” labels in 2023 | Detection capacity remains limited; enforcement is inconsistent; heavily reliant on self-reporting
TikTok | Introduced new rules banning harmful AI-generated content, added content labels for synthetic media, and partnered with fact-checking organisations in some regions | Limited transparency on enforcement; region-specific coverage is uneven
X (formerly Twitter) | Committed to “community notes” for misleading AI-generated media (a simplified sketch of this scoring approach follows the table); voluntary alignment with the EU’s Code of Practice on Disinformation | Scaled back trust and safety teams; enforcement is inconsistent; collaboration with researchers has been reduced
YouTube | Announced it would require disclosure of altered or synthetic content and expanded misinformation policies to cover AI-generated deepfakes | Labels rely on uploader honesty; proactive detection capacity is limited
Google (Search & Ads) | Banned political campaigns from using AI-generated content in ads without disclosure; improved detection for manipulated media | Enforcement is mainly ad-focused, not organic content; impact on the broader ecosystem is unclear
Snapchat | Rolled out AI content guidelines; partnered with third-party safety groups to track harmful synthetic media | Smaller scale than rivals; limited reporting on election-related enforcement
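
X’s “community notes” system, referenced in the table above, is publicly documented as a matrix-factorisation (“bridging”) algorithm: a note is scored helpful only when raters who usually disagree both rate it positively. The toy sketch below illustrates that idea on synthetic data; it is a drastic simplification for intuition, not X’s production scoring code.

```python
# Drastically simplified illustration of the "bridging" idea behind
# community-notes-style scoring: a note only earns a high intercept if
# raters with *different* latent viewpoints agree on it. Toy model in the
# spirit of X's published matrix-factorisation approach, not its real code.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: rows = raters, cols = notes; 1 = "helpful", 0 = "not helpful",
# np.nan = no rating. Raters 0-2 and 3-5 represent two opposing camps.
R = np.array([
    [1, 1, np.nan, 0],
    [1, 1, 0, np.nan],
    [np.nan, 1, 0, 0],
    [0, 1, 1, np.nan],
    [0, 1, np.nan, 1],
    [np.nan, 1, 1, 1],
], dtype=float)

n_raters, n_notes = R.shape
mu = 0.0
b_u = np.zeros(n_raters)             # rater intercepts
b_n = np.zeros(n_notes)              # note intercepts ("bridged helpfulness")
f_u = rng.normal(0, 0.1, n_raters)   # 1-D rater viewpoint factors
f_n = rng.normal(0, 0.1, n_notes)    # 1-D note polarisation factors
lam, lr = 0.05, 0.05                 # regularisation strength, learning rate

# Regularised least squares via simple stochastic gradient descent:
# prediction = mu + b_u[u] + b_n[n] + f_u[u] * f_n[n]
for _ in range(2000):
    for u in range(n_raters):
        for n in range(n_notes):
            if np.isnan(R[u, n]):
                continue
            err = (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n]) - R[u, n]
            mu     -= lr * err
            b_u[u] -= lr * (err + lam * b_u[u])
            b_n[n] -= lr * (err + lam * b_n[n])
            f_u[u], f_n[n] = (f_u[u] - lr * (err * f_n[n] + lam * f_u[u]),
                              f_n[n] - lr * (err * f_u[u] + lam * f_n[n]))

# Note 1 is rated helpful by both camps, so its intercept should dominate;
# notes 0, 2 and 3 split along camp lines and get explained by the factors.
print("note intercepts (higher = more cross-viewpoint agreement):", b_n.round(2))
```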

3.3.3 Regulatory Readiness

Ahead of the 2024 elections, many governments and policymakers acknowledged the dangers posed by deepfake disinformation and began developing strategies to address it. Efforts included detecting, labelling and, in some cases, restricting AI-generated content through both voluntary industry agreements and new laws. These varied initiatives signal growing acknowledgement that deepfakes pose serious risks to election integrity and democracy.

For comparative purposes, the following list of legislation and policies across different jurisdictions includes some that did not hold elections in 2024/25. The elements that worked, and those that did not, can be extracted from each model applied or under consideration. This is another area where civil society advocacy will be particularly beneficial: informing stakeholders of which successful legislation and policies exist or are under consideration.

Country/Region | Legislation/Action Taken | Penalties & Enforcement | Effects & Impact
European Union | EU AI Act (2024): regulates high-risk AI, including deepfakes and election disinformation | Fines up to €35M or 7% of global turnover, whichever is higher, for the most serious violations (illustrated in the sketch after this table) | Boosted transparency; inspired global policy models; platforms adapting rapidly
Ukraine | Proposed AI law on counter-disinformation; targets hybrid warfare and foreign interference | Criminal liability, content takedowns, and platform restrictions | Intended to deter foreign manipulation and strengthen the national security response
United States | State-level laws regulate deepfakes in elections and impersonation | Civil/criminal penalties, fines, and imprisonment, depending on intent | Platforms preemptively restrict content; growing pressure for federal action
China | Deep Synthesis Regulation (2023): requires all synthetic content to be clearly labelled and traceable; providers must obtain explicit consent before editing personal attributes; platforms must detect, disclose, and remove harmful or misleading AI-generated content | Fines, platform bans, and criminal charges for harmful or political misuse | Improved traceability, but critics warn of censorship risks under broad enforcement
South Korea | Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness: establishes national oversight and safety infrastructure to prevent harmful uses of AI, including deepfakes | Expected fines and imprisonment; legislation still pending implementation | Raises public awareness; platforms preparing for stricter compliance
India | Draft AI Accountability and Ethical Use Bill: targets AI-generated disinformation by recommending mandatory labelling, licensing for content creators, and prosecution mechanisms for deepfake misuse | Penalties under development; expected to include moderation mandates | Sparked debate on free speech vs. disinformation control; draft law still evolving
Singapore | Protection from Online Falsehoods and Manipulation Act (POFMA): requires correction notices or takedowns for AI-generated disinformation, including deepfakes, and empowers authorities to act swiftly against misleading content | Up to SGD 100,000 for individuals and SGD 1 million for platforms or entities that fail to comply with directives | Strong deterrence; emphasis on public education and platform responsibility
Japan | Act on Promotion of Research and Development and Utilisation of Artificial Intelligence-Related Technologies (March 2025): encourages safe AI use and allows government oversight of AI misuse, including disinformation | No specific penalties imposed | Focus on balancing innovation and safety; critics call for stronger enforcement mechanisms
Australia | Combatting Misinformation and Disinformation Bill 2024: would have empowered the Australian Communications and Media Authority (ACMA) to regulate harmful AI-generated content, including deepfakes; withdrawn over free speech concerns and lack of political support | None currently | High public concern; legal uncertainty; platforms urged to self-regulate
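
On the EU penalty ceiling referenced in the table: for the most serious violations, the AI Act sets the maximum at €35 million or 7% of total worldwide annual turnover, whichever is higher. The snippet below illustrates that rule; the turnover figures are hypothetical.

```python
# Sketch of the EU AI Act's headline penalty rule for the most serious
# violations: up to EUR 35 million or 7% of total worldwide annual
# turnover, whichever is higher. Turnover figures below are hypothetical.
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Return the theoretical maximum fine for a serious violation."""
    return max(35_000_000, 0.07 * global_turnover_eur)

for turnover in (100e6, 500e6, 10e9):
    print(f"Turnover €{turnover / 1e6:,.0f}M -> max fine €{max_ai_act_fine(turnover) / 1e6:,.0f}M")
# Turnover €100M    -> max fine €35M   (fixed cap dominates)
# Turnover €500M    -> max fine €35M   (7% = €35M, the break-even point)
# Turnover €10,000M -> max fine €700M  (turnover share dominates)
```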

3.3.4 Civil Society Readiness

Elections are integral components of democracy, enabling voters to participate in shaping their country’s future. 2024 was a year of many opportunities for democracy, with 74 national elections worldwide. Civil society organisations (CSOs) played an indispensable role in many of these, both in countries where elections were free and fair and in those that were more problematic. In Ghana, for instance, BudgIT Ghana, a CIVICUS Digital Democracy Initiative (DDI) partner, used X/Twitter to educate citizens about the voting process ahead of Election Day. Meanwhile, Fundación Efecto Cocuyo’s podcast series in Venezuela brought critical electoral issues into public discourse; one episode featured the head of the National Electoral Council, Aimée del Nogal, although the election results were widely seen as fraudulent.

CSOs can play a vital role in promoting informed participation, transparency, and electoral integrity, even in challenging contexts. There is a need for flexible support to scale up such initiatives to promote inclusive and credible electoral processes globally.

As civil society entered the 2024 election cycle, the emphasis shifted not only to observing the polls but also to sustaining civic space beyond the elections. CSOs prepared for post-election scenarios, potential contestation, digital manipulation flare-ups and campaign-era legacy issues.

CIVICUS’s DDI supported civil society in tackling election-related challenges by enabling local organisations to counter disinformation, monitor elections with digital tools, and mobilise underrepresented communities for meaningful civic participation. Through strategic resourcing, training, and digital tools, we strengthened electoral integrity and protected democratic spaces.

Building monitoring coalitions and observation capacity

Many CSOs across the globe renewed or expanded election-monitoring networks to respond to the 2024 cycle. Domestic coalitions partnered with electoral commissions, social media platforms, and media organisations to develop rapid incident-reporting systems, scenario planning, and election-day observation protocols.

Case Study: South Africa
Ahead of the 2024 national and provincial elections, South Africa’s Independent Electoral Commission (IEC) partnered with major social media companies, including Meta (Facebook, Instagram, WhatsApp), Google (YouTube), and TikTok, to curb the spread of electoral disinformation. The partnerships aimed to promote credible election information, strengthen content moderation systems, and provide direct reporting channels for false or harmful content related to the elections. The initiative also involved collaboration with fact-checking organisations and the Real411 platform to ensure timely identification and removal of misleading online material, and formed part of the IEC’s broader commitment to maintaining electoral integrity and public trust in South Africa’s democratic processes.

Digital preparedness and deepfake disinformation mitigation

Given the rising threat of manipulated media and disinformation, CSOs prioritised digital literacy, early warning mechanisms, and public information campaigns. In many election-year contexts, digital platforms, civic tech groups, and media watchers supported civil society efforts to monitor social media trends and coordinate responses.

Case Study: India
India’s BOOM, an independent digital journalism initiative, launched a deepfake tracker that combined investigative journalism with cutting-edge forensic analysis to expose and explain AI-generated deception. Using a blend of reverse image searches, frame-by-frame video analysis, and metadata forensics, BOOM’s team traced the digital fingerprints of manipulated media circulating across Indian social platforms. Each verified deepfake is catalogued with context: who shared it, how it spread, and what narrative it sought to push. Complementing the tracker, BOOM’s deepfake literacy campaign turned detection into public education; through workshops, explainers, and interactive videos, it taught citizens how to spot tell-tale visual inconsistencies and question too-perfect “evidence.” Together, these efforts transform technical verification into civic empowerment, helping audiences see not just what is fake, but why it matters.
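
To illustrate one strand of the workflow described above, metadata forensics, the sketch below inspects a suspect image’s EXIF data with Pillow. It is a crude heuristic of our own devising, not BOOM’s actual tooling: platforms routinely strip metadata, so a missing record proves nothing on its own, but surviving fields can carry useful provenance clues.

```python
# Illustrative metadata-forensics sketch (assumes Pillow is installed).
# A crude heuristic only: missing EXIF proves nothing (social platforms
# strip it in transit), but surviving metadata can hint at provenance.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_image_metadata(path: str) -> dict:
    img = Image.open(path)
    exif = img.getexif()
    # Map numeric EXIF tag IDs to human-readable names
    meta = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    findings = {"format": img.format, "size": img.size, "metadata": meta}
    software = str(meta.get("Software", "")).lower()

    if not meta:
        findings["note"] = "No EXIF metadata: stripped in transit, or synthetic"
    elif any(hint in software for hint in ("stable diffusion", "midjourney", "dall")):
        findings["note"] = f"Software tag suggests an AI generator: {software!r}"
    elif "Make" in meta or "Model" in meta:
        findings["note"] = "Camera make/model present: consistent with a real capture"
    return findings

print(inspect_image_metadata("suspect_post.jpg"))  # hypothetical file path
```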

Advocacy, civic education, and inclusive voter engagement

CSOs sought to amplify civic participation and safeguard the electoral process by targeting historically excluded groups, women, and youth. They ran outreach programmes to explain electoral rights, boost voter registration, and help citizens identify misinformation. In contexts of shrinking civic space, many organisations also developed contingency plans for legal and advocacy responses, from emergency observation statements to partnerships with international oversight bodies.

Case Study: USA
The League of Women Voters (LWV) in the United States launched a nationwide initiative ahead of the 2024 elections to counter disinformation aimed at women, young voters, and communities of colour—groups historically targeted with voter suppression efforts. Through its “Misinfo 101” campaign and partnerships with grassroots organisations such as Voto Latino and Black Voters Matter, the LWV trained local volunteers to identify and report misleading narratives about voting eligibility, mail-in ballots, and polling stations. The campaign also created multilingual, culturally tailored explainer videos and WhatsApp fact sheets to reach non-English-speaking communities. By combining digital monitoring with community-based voter education, the initiative helped inoculate traditionally excluded groups against coordinated disinformation designed to discourage or confuse them at the ballot box.


3.4 Lessons Learned from the 2024/25 Electoral Cycle

The 2024/25 election cycle offers numerous lessons. The first is simple: prepare, prepare, prepare. It is far better to overprepare than to underprepare.

Early warnings can sometimes cause alarm, but new technological threats demand thorough human impact assessments and awareness of both potential and real impacts. In 2024, what could be termed alarmism led to public awareness, platform intervention, CSO readiness and regulatory action, which, collectively, may have helped reduce the impact of deepfake disinformation.

Another key lesson is that these assessments must happen before new technologies are widely adopted, not after. Reactive responses are always too late. Anticipatory strategies give societies the resilience to absorb shocks.

Lesson Learned | Insight | Policy / Strategic / Advocacy Interventions
Threats Were Overestimated | Only about 215 cases of deepfake disinformation were recorded globally (IPIE), far fewer than expected | Avoid alarmist narratives, ground policy in evidence, and prioritise proportional responses; conduct empirical studies to measure potential impact
Quality Over Quantity | Even limited incidents matter due to hyperrealism and emotional resonance | Focus on detection, rapid response, and public awareness to neutralise “high-impact” deepfakes quickly
Context Shapes Impact | Deepfake effects varied depending on media systems, culture, and digital literacy | Develop context-sensitive interventions (regional fact-checking, culturally tailored media literacy)
Unknown Actors & Attribution Gaps | 46% of cases had untraceable origins; foreign/state-linked actors were implicated where identified | Strengthen cross-border attribution mechanisms, invest in OSINT, and advocate for platform transparency
Constructive Uses Emerged | Parties also used GenAI for civic outreach and accessibility | Encourage positive applications of GenAI in campaigns, while regulating manipulative uses
Resilience Stronger Than Expected | Public awareness, platform measures, and civil society monitoring helped blunt impacts | Scale up media literacy, strengthen CSO monitoring networks, and embed election-specific preparedness
Regulation Uneven | Laws exist in the EU, Singapore, the USA, and other jurisdictions, but enforcement varies widely | Push for harmonised global/regional standards; advocate for transparency obligations on platforms
Broader Harms Overlooked | A narrow focus on elections risks sidelining the harms of deepfakes targeting women and minorities | Expand definitions of “harmful deepfakes” beyond elections; push for gender-sensitive disinformation frameworks