By Linda Apollo

The Kenyan government has tried to curb the spread of false or inaccurate information through regulation. But outlawing disinformation alone will not address the spread of fake news. It has become relatively common for public entities and politicians in Kenya to disavow certain content shared through social media, claiming it to be false. This year alone, the Independent Electoral and Boundaries Commission (IEBC) has had to issue two statements dismissing election-related content as fabricated.

In July 2021, it was reported through mainstream media that the Directorate of Criminal Investigations (DCI) had arrested an individual involved in online fraud. According to the report, the police suspected that the individual hacked into the IEBC’s database and accessed personal details relating to 61,617 registered voters. Shortly after the news broke, the IEBC claimed it was false. Even more recently, in September, the IEBC had to issue a public statement clarifying that a call for applications for jobs in their voter education program was fake. These instances have not taken place in isolation; there is a broader discernible upward trend in false or inaccurate content in Kenya.

In a 2018 survey of 2,000 Kenyans by Portland Communications, 90 per cent believed they had interacted with false information relating to the 2017 Kenyan elections, while 87 per cent believed that such content was deliberately false. The issue of deliberately spreading false information was the subject of Odanga Madung and Brian Obilo's research for the Mozilla Foundation. In their report, they highlighted the extent to which the spread of disinformation in Kenya through Twitter was coordinated and well organized. These local developments also occur against a backdrop of global trends with striking similarities. Several governments, including Kenya's, have attempted to rein in the spread of false or inaccurate information through regulation.

In 2018, Kenya enacted the Computer Misuse and Cybercrimes Act (CMCA), which criminalizes sharing false information. Noting that some of these governments have directly linked their regulatory objectives to the safeguarding of their democracy, it is worth exploring the ways in which false or inaccurate content compromises democracies and, in particular, electoral integrity.

To a post-truth world

Undoubtedly, the ability to agree on basic facts is a core tenet of democracy. To make a collective decision optimally, voters ought to have access to the same accurate information. While arriving at a single "objective truth" is not always possible due to mediation in communication, it is important for citizens to at least have access to, and acknowledge, the basic facts that underpin the political processes they are participating in.

The increasing spread of false or inaccurate content in recent years points to the solidifying of a post-truth age where political rhetoric often appeals to emotion and sentiment with little regard for factual rebuttals. This post-truth concept is not entirely novel. For example, climate change denial and anti-vaxxer sentiments have long persisted despite the widespread availability of evidence to refute them. But in recent years the concept has gained significant currency, perhaps due to the increasingly populist nature of political campaigning in the digital age. In the same year that Donald Trump won the US presidential election, the Oxford English Dictionary's word of the year was "post-truth".

In his campaign, Trump made a habit of dismissing mainstream news reporting as "fake news" when it contradicted his narrative, and he later even falsely claimed to have coined the term. Likely emboldened by these trends, other populist leaders around the world began dismissing news reporting as fabricated when it did not suit their narratives. Rather dangerously, leaders who sound the alarm over false or inaccurate content are often either linked to the deliberate spread of such content or have benefited from it. Trump's campaign was boosted by a group of Macedonian teenagers who, driven by advertising revenue on Facebook, generated several seemingly genuine news articles that either directly supported Trump or discredited his opponent, Hillary Clinton.

The combination of these leaders casting aspersions on the integrity of traditional media and the spread of "alternative facts" on social media results in a political environment where voters are highly distrustful of each other and of core institutions such as the media. The danger is exacerbated by the nature of social media and the way third parties, often with the aid of the platforms themselves, can subtly curate the type of content users are exposed to. Distrust of institutions is not the only risk to democracies.

The terms "fake news", "disinformation", and "misinformation" have featured prominently in the discourse on the spread of false content. While these terms are generally used to assert that something is untrue, they are sometimes used interchangeably or incorrectly. This conflation then impairs any attempt at regulation. The term "fake news" does not necessarily refer to one specific type of content. Claire Wardle of First Draft has rightly noted that it is an entire ecosystem that includes both misinformation and disinformation. Elsewhere, one of us has categorized the nature of this content into two conceptions for purposes of understanding how to regulate it: the deliberate action and the culture around it. The deliberate action essentially refers to disinformation.

Spreading disinformation is the act of intentionally and knowingly sharing false or inaccurate content. In Kenya, Madung and Obilo identified groups of bloggers who were paid to push trends with false content that maligned certain political actors such as those who filed a petition to oppose the Building Bridges Initiative. These disinformation campaigns are often well-coordinated and targeted at a particular outcome. Due to the potency of such campaigns in electoral contexts, they have previously been referred to as “distributed-denial-of-democracy attacks”. These disinformation campaigns are often successful because of the second categorization, the culture of misinformation.

Misinformation can range from misleading or alarmist headlines to demonstrably false claims passed on by people with a good-faith belief in their accuracy. For example, a few years ago, the Kenya Bureau of Standards had to issue a statement denying the existence of "plastic rice" in Kenya following the circulation on WhatsApp of a video implying that there was. WhatsApp is a particularly notorious avenue through which misinformation is shared locally. Even mainstream media is sometimes susceptible to sharing misinformation.

Unlike coordinated disinformation campaigns, which may often be linked to a central source, misinformation entails the public playing an active role in both creating and amplifying narratives. Renée DiResta refers to misinformation as amplified propaganda. This culture of misinformation has been enabled by several things. First, the use of social media as a source of news content has led to a decline in the gatekeeping and fact-checking of content. Second, the nature of social media is such that it amplifies one's biases and exposes one to content that often confirms one's worldview. This in turn makes people more likely to consume false or inaccurate content uncritically. Lastly, the existence of disinformation campaigns and the discrediting of claims as false by politicians further muddies the waters, making people unsure of what is "objectively true".

This has made addressing the problem of fake news difficult.

Regulating truth

Conceivably due to a focus on disinformation, regulation seeking to rein in false or inaccurate content has often been quick to criminalize the spread of fake news. As noted in the Kofi Annan Commission on Elections and Democracy in the Digital Age (KACEDDA) report, there is insufficient data regarding the individuals, motives and means behind the spread of fake news, which possibly hampers regulatory efforts.

Where platforms are at risk of incurring liability for user conduct, they are more likely to pre-emptively censor content they deem problematic. The net effect of fake news laws aimed at platforms would therefore be the suppression of protected speech in an unprocedural manner by private entities. Empowering the citizens to both identify accurate sources of information and understand the role of different institutions in a democracy would contribute significantly to stemming the inadvertent spread or consumption of misinformation.

To address both the disinformation and the broader culture of misinformation that enables it, one must go beyond such regulation. Citizen empowerment, coupled with collaborative fact-checking initiatives between the government and mainstream media, would enable voters to discern fact from falsehood. Considering the centrality of social media to everyday news consumption, it would also be prudent to engage these platforms in such fact-checking initiatives.

In recognition that fact-checking often occurs only after false information has already spread, mainstream media should also explore pre-bunking initiatives. These would involve identifying the common tropes around false narratives and priming audiences to receive them critically. Such efforts have been proposed as solutions to the current spread of misinformation around COVID-19 vaccines. Even where pre-bunking efforts are not adopted, the entities involved in the fact-checking initiatives proposed above may collaboratively engage in debunking campaigns, developing counter-messaging once misinformation has been disseminated.

While it is indeed necessary to curb the spread of false or inaccurate content, attempting to do so through criminalization alone poses several risks. Governments, social media platforms, and mainstream media ought instead to collaborate on legal and policy-based initiatives to stem the culture of misinformation. The IEBC, fortunately, has several examples to draw from on how, as an electoral body, it can coordinate efforts to address the culture of misinformation around elections. In all, when seeking to curb the spread of fake news (both deliberate and inadvertent), it is important for governments to consider why their citizens are susceptible to false information rather than merely how and by whom that information is spread.

Cover image courtesy of Thomas Ulrich from Pixabay.