2025 STATE OF CIVIL SOCIETY REPORT

TECHNOLOGY: HUMAN PERILS OF DIGITAL POWER

AI concerns

There’s growing awareness of AI’s climate and environmental impacts. The huge data centres needed to power AI consume vast amounts of electricity and water, and as AI expands, so does the problem. A single question to an AI chatbot can use 10 times more energy than a conventional search. As a result, AI growth is driving a boom in the construction of gas-powered electricity plants, even though these should be phased out to meet climate goals. Thanks to expansion of its data centres, Google’s greenhouse gas emissions leapt 48 per cent from 2019 to 2023; so much for its stated aim of becoming net zero by 2030.

The latest innovation, DeepSeek, could alleviate some of these concerns, since it appears not to need data centres on the same scale. But there are other questions about this Chinese development that caused a stir in January 2025, when it instantly became the most downloaded app in over 150 countries. Ask DeepSeek a question about Hong Kong, Taiwan or Tiananmen Square and it will either clam up or spew out Chinese state propaganda. It’s a development that does nothing to alleviate fears that AI will help manipulate opinions by spreading narratives based on lies.

US Border Patrol agents photograph people to capture biometric data for tracking purposes at a migrant processing centre in Arizona, USA, 7 December 2023. Photo by John Moore/Getty Images.

The power of disinformation, as seen in many of 2024’s elections, has been turbocharged by developments in generative AI – technologies that produce text, images and videos from prompts. These tools reflect the biases of their makers and the inputs they’ve been fed, and have simplified the creation of convincing fake photos and videos. Even when disinformation isn’t deliberate, generative AI has a habit of making up plausible-sounding but factually incorrect answers. A recent BBC study found that around a fifth of AI chatbot answers citing its news content introduced factual errors.

Both generative AI and the quest for artificial general intelligence – which could replicate and surpass human ability to learn and understand – could bring benefits, but they also raise an array of concerns. It may sound alarmist to voice fears, but in December 2024 the scientist often considered the ‘godfather’ of AI, Nobel laureate Geoffrey Hinton, offered a chilling warning that there’s a 10 to 20 per cent chance of AI wiping out humanity within decades.

AI models currently in use have already brought concerns about impacts on jobs, copyright and intellectual property, given the human-generated material they’re trained on, and biased results that fuel further exclusion.

AI is also increasingly used in surveillance technologies, including the growing field of facial, emotional and biometric recognition. Here there are issues of bias, overreach and function creep, as technologies that may initially be used to combat terrorism, for example, become more widely deployed to undermine freedoms simply because they’re available. AI’s growing military use is another area of concern; in February 2025, Google’s parent company Alphabet dropped its promise not to use AI for weapons development or surveillance, a far cry from Google’s former ‘don’t be evil’ motto.

It’s clear developments in AI are far outstripping the pace of regulation. But when the AI Action Summit was held in Paris in February 2025, while around 60 states endorsed a statement backing sustainable, open, transparent, ethical and safe AI, the USA and UK refused to sign, with the USA expressing concern about ‘excessive regulation’. One of the first things Trump did was rescind an executive order that established AI safeguards. The danger is of a growing regulatory gap.

Spyware is another issue that civil society is concerned about, and that affects civil society. Numerous states have used Pegasus spyware, supplied by Israel’s NSO Group, to spy on civil society, the media and the political opposition. The government of Jordan is the latest revealed to be using Pegasus, targeting at least 35 people as part of a sustained civic space crackdown.

There’s an urgent need to ban spyware, which the NSO Group only sells to states, and for a global moratorium on the development and sale of digital surveillance technologies until strong human rights safeguards are in place.

Tech leaders align with Trump

If only tech leaders could be trusted. But they’re increasingly showing they can’t. Silicon Valley’s billionaire entrepreneurs once tried to paint themselves as socially conscious, but their act has become increasingly difficult to maintain.

One troubling sign is the way they quickly lined up behind the Trump administration, donating millions of dollars to his inauguration fund. Amazon, Google, Meta, Microsoft and Uber each gave US$1 million, and tech CEOs such as Apple’s Tim Cook and OpenAI’s Sam Altman chipped in too.

Tech leaders including Meta’s Mark Zuckerberg, Amazon’s Jeff Bezos, Google’s Sundar Pichai and Twitter/X’s Elon Musk attend Donald Trump’s inauguration in Washington DC, 20 January 2025. Photo by Julia Demaree Nikhinson/Pool via Reuters/Gallo Images.

At the very least, donations of this unusual magnitude signalled a determination to stay onside with a fractious president. They may also have indicated a desire to limit potential AI and cryptocurrency regulation and grab a greater share of defence spending.

But for Meta, owner of Facebook, Instagram, Threads and WhatsApp, the donation was just one of the ways it’s taken a pro-Trump direction. In January 2025, the company announced it was ditching its independent fact-checking programme in the USA. Zuckerberg claimed fact-checking had led to excessive censorship and the move would promote free speech. Instead, Meta will adopt something similar to Twitter/X’s community notes system.

Zuckerberg’s conflation of fact-checking with censorship is disturbing, and there are many problems with Twitter/X’s alternative, including that most disinformation spreads before notes can correct it. Meta has already been accused of failing to prevent its platforms being used to spread hate speech that fuelled violence in India, Myanmar and recently in Ethiopia, while systematically censoring posts speaking out for Palestine. Under its changed policies, it’s now acceptable to accuse LGBTQI+ people of being mentally ill or refer to women as property. Trump welcomed the changes.

Meta also agreed to pay Trump US$25 million to settle a lawsuit he filed after the company suspended his accounts following the January 2021 insurrection, and it axed its DEI initiatives following the Trump administration’s attacks on them. Zuckerberg’s charity, the Chan Zuckerberg Initiative, ditched its DEI team too. Other tech companies have followed suit: Google dropped diversity hiring targets and announced it would no longer observe events like Black History Month and Pride Month, while Amazon removed the diversity and inclusion section of its annual report.

For some tech leaders, alignment with right-wing populism has come easily. They see themselves as exceptional people to whom normal rules don’t apply. They like to move fast and break things, to quote Facebook’s former motto. They’re suspicious of the state – unless, perhaps, they’re in charge. They see Trump as a kindred spirit. None more so than Elon Musk.

Musk makes his move

Generally reckoned to be the world’s richest man, Musk put his fortune at the service of getting Trump elected, appearing at his rallies, donating US$288 million and offering swing-state voters the chance to win US$1 million for signing a pro-Trump petition.

Musk consistently retweets extremist content. He has the platform’s most-followed account, and in 2023 he insisted on algorithm changes to make his content even more prominent. So whatever he touches has huge reach – particularly in the USA, where the platform has the most users, and among the young men who disproportionately use it. Buying Twitter/X may have been bad business – the company is believed to be worth a fraction of the US$44 billion he paid – but it was successful politics. A once relatively liberal platform is now a right-wing bastion. Many leftist voices have left, banned extremists have been allowed back and Musk intervenes constantly to steer the conversation.

The business leader no one voted for has inserted himself into the heart of Trump’s oligarchical operation, heading the pseudo-governmental Department of Government Efficiency (DOGE) with the professed aim of achieving drastic public spending cuts, although the agenda is clearly more political than financial. Among the federal bodies targeted are those perceived by the Trump camp to have a liberal bias, including the Department of Education, the National Oceanic and Atmospheric Administration, which provides climate data, and the US Agency for International Development (USAID), the world’s biggest aid agency.

The USAID spending freeze, imposed at Musk’s prompting in January 2025, caused instant chaos. Programmes that provide the world’s poorest and most vulnerable people with vital services like healthcare shut down. Refugees from Myanmar’s persecuted Rohingya minority, for example, were left without the most basic aid in camps in Bangladesh.

Civil society has been rocked. Many CSOs and independent media working in restricted civic space and conflict settings where domestic resources are lacking rely on USAID support. This includes many independent Ukrainian and exiled Russian media organisations, which have been left struggling.

If the cuts become permanent, the result will be a diminished civil society far less able to defend rights and hold the powerful to account. The fact that some of the world’s most authoritarian leaders welcomed the move said it all. This wasn’t a step anyone who cares about democracy and human rights – including freedom of speech – would take.

It isn’t just with the USAID freeze that Musk has had an impact beyond the USA’s borders. He’s repeatedly intervened in the UK’s politics, attacking Prime Minister Keir Starmer and posting and boosting far-right content. Following a series of riots sparked by anti-migrant and anti-Muslim disinformation in the wake of a horrific knife attack, he posted that ‘civil war is inevitable’ in the UK, shared disinformation from the leader of a far-right hate group and promoted the false claim that the UK’s criminal justice system treats Muslims more leniently.

Musk also intervened in Germany. Ahead of the February 2025 election, he hosted a 75-minute uncritical interview with AfD co-leader Alice Weidel and claimed that ‘only the AfD can save Germany’.

Striking back

Social media has power because so many use it, and people can choose which platforms they use and don’t use. Hundreds of thousands of users, along with several CSOs and businesses, quit Twitter/X following Trump’s re-election. But the broader challenges were seen in the fact that many fled to Threads, only to face another choice when Zuckerberg introduced his Trump-friendly changes.

Poster installed by the activist collective Everyone Hates Elon Musk on a bus shelter in London, UK, 12 March 2025. Photo by Leon Neal/Getty Images.

As new platforms emerge, populist and nationalist politicians continue to do best on them. In 2024, TikTok, with its young demographic, was embraced by Trump, along with Subianto in Indonesia and Georgescu in Romania. In many elections, such as those in Germany and the USA, young men in particular are disproportionately backing right-wing populists, and it’s a trend partly influenced by the social media they’re relentlessly exposed to.

Much political debate takes place on platforms that exist to get eyeballs on advertising. Algorithms seek to keep users hooked by serving eye-catching, sensationalist content. This rewards simplistic and populist narratives over nuance and reasoned debate.

But as people become more selective about the platforms they use, the notion of being participants in a shared global town square recedes, replaced by the danger of retreat into smaller circles of confirmation bias. For some companies and creatives, switching from platforms with broad reach to others with less engagement isn’t feasible, while progressive voices might not want to cede territory to regressive forces. Completely quitting social media is hard because it’s addictive by design.

Pressure on advertisers is one response, given how important ad revenue is. When civil society research found that hate speech against Black US citizens on Twitter/X tripled after Musk’s takeover, publicity about its findings led to a fall of around US$100 million in ad revenue. There’s also evidence of a backlash against Musk in a sharp drop in Tesla car sales in Europe.

It’s hard to see progress without proper and principled regulation. But there are huge dangers. When states introduce social media bans, it’s often to block criticism and scrutiny. In 2024, social media restrictions in countries including Bangladesh, Pakistan and Solomon Islands served precisely that function. In the USA, Trump suspended Biden’s TikTok ban, imposed on the grounds that its Chinese ownership made it a threat, but the condition may be that its owners sell its US arm to one of his supporters, a move that would replace concerns about Chinese state influence with those of pro-Trump bias.

Brazil however proved it’s possible to hold social media companies to account. Its supreme court banned Twitter/X after it repeatedly refused to comply with orders to moderate content from several accounts linked to an attempted far-right insurrection in January 2023. The court imposed heavy fines for non-compliance, but Twitter/X closed its Brazil offices. When the company failed to meet a deadline to appoint a legal representative, the court ordered its closure.

The move was controversial, with civil society particularly critical of an order for VPN services to block access. But whatever the rights and wrongs of the decision, the result was that despite much posturing, Musk backed down. He doesn’t have to get his way. It’s possible to strike a balance between freedom of expression and holding social media platforms accountable.

Regulation pitfalls

The global nature of the challenge demands an international response. The UN’s 2024 Summit of the Future, intended to strengthen international cooperation, made little progress but did adopt the Global Digital Compact. Civil society engaged extensively with the process but didn’t necessarily see its input on key human rights issues reflected in the resulting text. While the compact condemns surveillance and calls for privacy protections, it’s silent on the gendered aspects of online abuse and pulled its punches on internet shutdowns.

The 2024 Internet Governance Forum, supposedly intended to address the opportunities and risks posed by AI and other digital technologies, was held in Saudi Arabia, where the state frequently criminalises online expression. Photo by UN/Department of Economic and Social Affairs.

Not all internet regulation is good news. In August 2024, the UN agreed a Cybercrime Convention. There’s no doubt people need protections from cybercrime, which is expected to cost over US$10 trillion this year. But it’s also the case that many states brand people as cybercriminals merely for speaking out. Numerous states have adopted broad and excessive cybercrime laws, and in 2024, authorities in countries including Indonesia, Jordan, Nigeria and Serbia used their heavy-handed laws to arrest and detain people, including for raising concerns about environmental issues, exposing corruption and expressing solidarity with Palestine. Now such repression could be presented as compliance with a global treaty.

Many in civil society questioned why the treaty, sponsored by Russia, was needed: existing agreements, particularly the Council of Europe’s Budapest Convention, which is only partially operational, would suffice if implemented. The challenge was one of damage control, with civil society engaging to demand human rights safeguards and advance a narrow definition of cybercrime that didn’t encompass online expression.

The final treaty, while better than the first draft, lacks clear, specific and enforceable human rights protections, leaving them up to domestic law. It gives wide scope for international cooperation in data collection and sharing, offering disturbing potential for the expansion of surveillance powers. Civil society is calling on states to consult before ratifying, and to ensure their response to cybercrime is consistent with respect for human rights.

The UN’s latest Internet Governance Forum, held in December, didn’t offer much of an opportunity for civil society to debate the treaty. Shockingly, it was hosted by authoritarian Saudi Arabia, and when civil society tried to highlight the fact that the host government is one of the worst offenders in jailing people for online expression, UN staff apparently intervened to remove critical content. At the very least, the UN must guarantee safe international spaces to discuss human rights and tech issues.

When it comes to AI, the most significant regulatory development in 2024 was the entry into force of the EU’s AI Act. Again, civil society engaged with the process, which led to some improvements, including limits on biometric identification and the inclusion of fundamental rights impact assessments. But again, there are strong concerns about insufficient human rights safeguards. AI systems are exempt from protections if they’re used for national security, loopholes allow some surveillance systems and migrants aren’t accorded the same rights as EU citizens. The potential remains for the EU’s more repressive states, such as Hungary and Slovakia, to use AI against civil society, and for states to deploy it in a race to the bottom on migrants’ rights.

What’s clear is that tech leaders, whether Musk, Zuckerberg or those who take a lower profile, can’t be trusted to self-regulate, and regulation can’t be left to states either. In difficult times, it remains necessary to assert the centrality of human rights and push for global standards consistent with them. Civil society’s voices must urgently take centre stage in this crucial debate.