The warning signs are there, but are the social media platforms doing enough to prevent disruption of our elections? They are so secretive that we don’t really know. ANTON HARBER poses some questions for them
The World Economic Forum’s Global Risks Report 2024 ranked Artificial Intelligence-derived misinformation and disinformation as the biggest global risk in this election year, ahead of climate change, war and economic weakness. Our own Independent Electoral Commission (IEC) has said: “The burgeoning use of digital media in recent years has seen a corresponding surge in digital disinformation, particularly on social media platforms … Left unchecked, this phenomenon stands to undermine the conduct of credible elections.”
As one IEC commissioner, Janet Love, put it: “Digital media has the potential to be an asset in the promotion of democracy, transparency and informed decision-making that should underpin elections as it provides platforms for rapid and wide sharing of information. But it also comes with significant risks and we have seen disinformation posing a very real threat to free and fair elections elsewhere in the world.”
This sums up the conundrum for free speech and human rights advocates. We want to maximise the flow of information and everyone’s access to these global platforms, and harness their potential for good, but we have to recognise the dangers they present.
Just this week, the Association of African Electoral Authorities warned that “disinformation and other potential digital harms … have undermined efforts to promote peaceful and democratic elections”. They called for digital and social media platforms to be transparent and accountable for their measures to prevent election disruption.
We saw foreign interference via social media in the 2016 US elections and in the UK’s Brexit vote. In a 2018 paper, the Carnegie Endowment studied five cases of active Russian attempts to interfere in European elections: the Netherlands, France, the UK and Germany in 2017, and Sweden in 2018. Only Germany – which had previously detected serious interference – found little in this round.
That was a few years ago. How much greater is the capacity to mislead now, with the power of Artificial Intelligence (AI)? Just this week, many were taken in by a fake AI video that portrayed Elon Musk promoting a shady investment.
Election interference is often done by proxies, some of them local, and it can take many forms: hacking into servers to gather and leak harmful information about the parties they want to undermine; promoting polarising views online; using fake accounts to spread messages that boost their allies and undermine their foes, such as stoking fear of migrancy; and mounting personal attacks on key leaders. Sometimes the purpose is to promote favoured candidates, like Marine Le Pen in France or Donald Trump in the US; sometimes it is enough to sow chaos, distrust and social division.
In most cases, though, harm was minimised by precautionary government action to counter the interference and bolster balloting systems and security. Carnegie concluded that prevention and control were most effective when government, election officials, political parties and the media worked together to counter such attacks. It also helped in those countries – like France – which still had a relatively robust and trusted mainstream media.
Russia is often the culprit, and we know it has factional interests in our politics, but it is not the only one that gets up to mischief. Election organisers have to be vigilant about possible attacks from all sides, both local and international.
The attitudes of the social media platforms vary. Elon Musk’s X (formerly Twitter) has fired most of the staff who monitored and dealt with disinformation and disruption on the platform. Musk did so in the name of free speech, but it has led to an increase in hate speech and disinformation on X – hardly a victory for open and free exchange.
Meta (Facebook) at first denied that it had played a nefarious role in the US elections, then acknowledged it and pledged to fix the problem. It has shown a reluctance to do more than the minimum, taking action only under public and political pressure. Meta first said that 10 million Americans had seen election adverts placed in 2016 by a Russian agency to foment division, then upped that figure to 100 million viewers. It later released 3 500 such adverts, indicating the scale of the Russian campaign – and how much profit Meta must have made off it.
Facing a huge outcry, Meta set up a war room to monitor this for the next election, closed fake accounts and curbed political advertising at election time. Even then, it shut the operation down after the poll, helping insurrectionists build unchecked momentum to storm the US Congress.
Most recently, Meta has said it will not promote political or election posts. But this blocks useful information as well as disinformation – a blunt instrument that does as much harm as good.
The crucial thing, though, is that we have to take the platforms’ word for all of this: they tell us little about the steps they actually take, particularly in an African context.
To try to understand whether our own vote is safe, the Campaign for Free Expression has written to four of the major platforms with a series of questions about what they are doing – and not doing – around our ballot. What preparations are they undertaking? Who are they consulting? What is their assessment of our risks? If they are monitoring for key words, are they monitoring in languages other than English? What steps are being taken to protect journalists and political figures from online harassment? How have they identified vulnerable targets? What verification will they conduct for election advertising? What measures will they take when they detect improper interference? Will they inform the public and/or the authorities? And so on – all vital information for knowing how safe we are.
If these platforms are genuine about protecting the integrity of our elections, and want our trust, they will answer these questions. If not, it will strengthen the argument for regulatory and legal intervention.
*Harber is executive director of the Campaign for Free Expression. For the full list of our questions to the tech companies, see www.freeexpression.org.za