
- Diana Nyakundi (Tech Policy Fellow, Lawyers Hub)
- April 19, 2022
- Tackling Misinformation: What's Amiss?
Globalization is a network of production, culture and power built on the innovative infrastructure brought about by a revolution in information and communication technologies. These communication technologies, such as the Internet, have created a pathway for new systems, such as social media. Increased connectivity and access to networks are among the forces that have driven globalization, and these drivers have in turn produced new forms of inclusion, exclusion, fragmentation and integration. The digital space, for example, is currently marred by misinformation, disinformation and hate speech, among other harms.
To cleanse the online fora, both big tech companies and governments have taken measures to control the spread of misinformation online. Although disinformation and misinformation mean different things, they are often used interchangeably. The primary difference is intent. Disinformation is a made-up story with a malicious intention to deceive, cause harm and manipulate public perception. Misinformation, on the other hand, is the inadvertent spread of misleading or false information, without the intent to deceive.
It is important to understand what drives misinformation and why individuals may believe and share it. These drivers range from psychological factors, such as motivated reasoning and strong emotional responses to particular stimuli, to more calculated political motivations. Financial incentives drive the creation of both scams and much of the misinformation that serves as clickbait. The recommendation systems of many social media platforms compound the problem: they promote often-misleading material that provokes strong emotional responses, and they provide financial incentives that can drive misinformation.
In Kenya, the prevalence of fake news and misinformation was most evident during the 2017 electioneering season, spread through WhatsApp and Facebook. The misinformation was so widespread that Facebook rolled out an educational tool intended to help its users spot fake news. Unfortunately, the spread of misinformation has only grown, especially in the wake of Covid-19.
Nelson Kwaje, who leads a digital team at #Defyhatenow, an organisation tackling online hate speech and now countering COVID-19 misinformation, called out false “remedies” such as boiling onions with lemon or taking tea without sugar. “Blacks don’t get coronavirus,” said one erroneous tweet seen by Reuters, posted by a user in Kenya with nearly 700,000 followers. “There is also misinformation related to government directives and public announcements. This could be as simple as people not understanding it, or misinterpretation of the directive,” said Kwaje, citing those who confused drinking alcohol, which does not protect against the virus, with using alcohol-based hand sanitiser, some types of which can be an alternative to hand-washing with soap and water. So how can we address misinformation? In this blog post, I contribute to the discussion on how to tackle it.
Tackling misinformation
Numerous recent media articles suggest that the strategy of major platforms ‘has never been to manage the problem of dangerous content, but rather to manage the public’s perception of the problem’. Yet, as argued by Rosenstiel, ‘Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to’ (see Tom Rosenstiel, The Future of Truth and Misinformation). Nevertheless, various strategies have been adopted to tackle misinformation, such as deplatforming and quarantining.
Deplatforming
The most common mechanism adopted by social media networks has been deplatforming. Generally, deplatforming is the mechanism currently used by social networks and technology companies to suspend or ban users who have allegedly violated their terms of service. In particular, shutting down accounts helps prevent average, unsuspecting users from being exposed to dangerous content. Unfortunately, it doesn’t necessarily stop those who already endorse that content.
The effectiveness of deplatforming has, however, been questioned, raising critical questions. Does deplatforming really cleanse the online space? Or is it a form of punishment meant to ensure the deplatformed user can no longer cause harm? Does it mean that once a user’s account has been suspended, the post that led to the suspension cannot be seen or found anywhere? To understand this, it is important to interrogate what happens to a user’s online activity once their account has been suspended.
What follows deplatforming is platform migration, which makes the whole menace someone else’s problem. The user moves to a platform with more lax regulations, and these alternative platforms become havens for such users. Most of the accounts on these platforms are created after suspension, and the users become more active, with most of their posts complaining about unfair suspension and the limitation of their freedom of speech. While the intention is to control hateful speech and misinformation, these users tend to gain a wider audience on the alternative platforms, where people are all the more curious about what exactly got them suspended, engage with them more and sometimes even end up supporting their hateful positions.
Additionally, questions arise as to whether the process of deplatforming is itself fair. Sometimes it is carried out unfairly and arbitrarily. For example, there have been instances where a suspended user has been left confused, with unanswered questions as to what exactly they posted to warrant the suspension. This further aggrieves the individual, causing them to become even more toxic, especially on the alternative platforms.
Thus, digital platforms should explain this process clearly to all users, especially to users who file a complaint. They should send precise information to complainants on any follow-up procedures, enforcement action and the reasoning behind the action taken, and explain to users why their content has been restricted, limited or removed, or why their account or profile has been suspended, blocked or deleted. Notifications should include, at a minimum, the specific clause of the community rules that the user allegedly violated. They should also be detailed enough to allow the user to identify the restricted content specifically, and include information on how the content or account was detected, evaluated and deleted or restricted. Users should also be given clear information on how to appeal the decision.
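To make the point concrete, here is a minimal sketch in Python of what such a notification might carry. The `ModerationNotice` structure and all of its field names are hypothetical illustrations of the requirements listed above, not any platform's actual API.

```python
from dataclasses import dataclass

@dataclass
class ModerationNotice:
    """Hypothetical notice sent to a user whose content or account was actioned."""
    content_id: str        # lets the user identify exactly which post was restricted
    action: str            # e.g. "removed", "restricted", "account_suspended"
    violated_clause: str   # the specific community-rule clause allegedly violated
    detection_method: str  # how the content was flagged, e.g. "user_report", "automated"
    reasoning: str         # plain-language explanation of the decision
    appeal_url: str        # clear information on how to appeal

# Example notice (all values illustrative)
notice = ModerationNotice(
    content_id="post/4821",
    action="restricted",
    violated_clause="Community Rules §4.2 (misinformation)",
    detection_method="user_report",
    reasoning="The post makes a health claim contradicted by public-health guidance.",
    appeal_url="https://platform.example/appeals",
)
print(notice)
```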
The Legal Framework Governing Misinformation in Kenya
Looking through a Kenyan lens, Section 66 of the Penal Code criminalises the publication of false statements, rumours or reports which are likely to cause fear and alarm to the public or to disturb the public peace. It is unclear how to determine whether a statement is “false”, or the scope of something that is “likely to cause fear and alarm to the public or to disturb the public peace”. Section 66 thus fails to provide sufficient guidance for individuals and gives an overly wide degree of discretion to those charged with enforcing the law.
Section 22 of the Computer Misuse and Cybercrimes Act, 2018 also criminalises “false publications”, and Section 23 criminalises the “publication of false information”. It is equally unclear how to determine what is considered “false”, or the scope of something which “is calculated or results in panic, chaos, or violence among citizens of the Republic, or which is likely to discredit the reputation of a person”.
While these existing laws have a chilling effect on media freedom and public debate, they miss their declared target of reducing the harm caused by false information, failing to address that harm in an effective or proportionate manner or on an effective scale. Consider, for example, the arrests of Robert Alai in March 2020, for a social media post accusing the Kenyan government of concealing information about the extent of COVID-19 in the country, and of Elijah Muthui Kitonyo, for allegedly tweeting from a fake account that the Kenyan authorities lied about the first confirmed case in Kenya coming from the USA via London. The reports do not indicate that either post was likely to harm the government’s response to the pandemic, a clear depiction that these laws penalise the publication of information declared ‘false’ regardless of whether it caused or risked harm. The laws require no link between false information and harm.
Moreover, to restrict the flow of what they consider potentially unhelpful information ahead of elections and at times of feared unrest, governments across Africa have frequently turned in recent years to shutting down the Internet. Tanzania restricted access to the internet and social media applications during its elections in October 2020. In June that year, Ethiopia imposed an internet shutdown that lasted close to a month after the unrest that followed the killing of the prominent Oromo singer and activist Hachalu Hundessa.
Zimbabwe, Togo, Burundi, Chad, Mali and Guinea also restricted access to the internet or social media applications at some point in 2020. The effect of this is that people don’t feel “secure” or safe when they can’t figure out what’s going on, can’t get access to important news or reach emergency services. Controlling the flow of information is an authoritarian tactic. It is a clear extension of an age-old pattern of repression into a new technological sphere.
Deplatforming is an ineffective strategy and is inherently reactive, as the harm has already been inflicted. It does not focus on correcting false information or improving access to accurate information. There have also been debates that deplatforming allows tech companies to become arbiters of free speech, and it clearly highlights a failure by these platforms to encourage healthy debate. It denies extremists the opportunity to be challenged and possibly persuaded towards a different perspective.
The inconsistencies in platform policies beg for collaboration among these platforms and other stakeholders to streamline what qualifies as misinformation, what the procedure for suspending a user over misinformation would be, and how to ensure that the stipulated policies cut across all platforms. That way, the migration of misinformation from one platform to another would cease to be a reality.
Quarantining
The deployment of quarantining would mean that if a given post was automatically identified as constituting misinformation, the recipient would receive an alert. The recipient could then decide whether or not to read the post after seeing who wrote it and after being informed that it has been specifically flagged as potentially constituting misinformation; the recipient could also receive an indication of the severity of the post. In this framework, misinformation is treated like a form of malware: the senders are not censored in a crude, unilateral manner, while the recipients of the false information are given the agency to determine how they wish to handle it. This approach potentially preserves freedom of expression, yet the harm caused by misinformation is still controlled, in a safe fashion, by those most directly affected. Senders remain free to write whatever they wish, but recipients get to decide which kinds of messages they receive, providing a balance between freedom of expression and appropriate censorship.
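A minimal Python sketch of how such a quarantine step might sit between detection and delivery follows. The classifier stand-in, the threshold and the message shapes are all assumptions for illustration; a real system would use a trained model and a proper client interface.

```python
# Hypothetical quarantining flow: flagged posts are delivered behind an
# alert, and the recipient decides whether to open them.

def classify_misinformation(text: str) -> float:
    """Stand-in for a trained classifier; returns a severity score in [0, 1]."""
    flagged_phrases = ["miracle cure", "they are hiding", "100% proven"]
    return min(1.0, 0.4 * sum(p in text.lower() for p in flagged_phrases))

def deliver(post: dict, threshold: float = 0.3) -> dict:
    """Wrap a flagged post in a quarantine alert instead of blocking it outright."""
    severity = classify_misinformation(post["text"])
    if severity < threshold:
        return {"status": "delivered", "post": post}
    # The post is not censored: the recipient sees who wrote it, a flag,
    # and a severity indication, and chooses whether to open it.
    return {
        "status": "quarantined",
        "author": post["author"],
        "severity": round(severity, 2),
        "open": lambda: post,  # recipient opts in to read the content
    }

result = deliver({"author": "@someone", "text": "This miracle cure is 100% proven!"})
print(result["status"], result.get("severity"))
```

The design choice mirrors the paragraph above: the decision point moves from the platform (delete or not) to the recipient (open or not), which is what preserves the sender's expression while still containing the harm.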
Conclusion
In conclusion, governments should repeal or amend legislation that penalises the publication or broadcast by traditional (TV, radio, print and online) news media of information on grounds of its accuracy or falsity, to ensure that any penalties are both (i) proportionate and (ii) only applied where the publication or broadcast of that information can be proven, according to publicly set-out criteria, to have caused substantial harm or plausibly risked imminent harm. Additionally, governments should increase efforts to improve access to accurate information, for example by setting up an independent watchdog for the quality of official statistics and an independent body to ensure public access to that data.
Another approach is enabling the growth of the independent fact-checking organisations seen across the continent in recent years. Social media companies could also use a blend of deprioritising engagement, partnering with news organisations, and AI and crowdsourced misinformation detection. These approaches are unlikely to work in isolation and will need to be designed to work together, as the sketch below illustrates.
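One way to picture that blend is a single ranking score that combines the three signals. The weights, signal names and function below are purely illustrative assumptions, not any platform's actual ranking formula.

```python
def ranking_score(engagement: float, ai_misinfo_prob: float,
                  crowd_flag_rate: float, fact_checked_false: bool = False) -> float:
    """Illustrative blend of the three signals named above.

    engagement         -- normalised engagement signal in [0, 1]
    ai_misinfo_prob    -- model-estimated probability the post is misinformation
    crowd_flag_rate    -- share of viewers who flagged the post as misleading
    fact_checked_false -- True if a partner news organisation rated the post false
    """
    score = engagement
    score *= 1.0 - 0.7 * ai_misinfo_prob    # AI detection deprioritises rather than removes
    score *= 1.0 - 0.5 * crowd_flag_rate    # crowdsourced flags compound the demotion
    if fact_checked_false:                  # news-partner verdict: strongest demotion
        score *= 0.1
    return score

# A viral post flagged by the model, the crowd and a fact-checker drops sharply.
print(ranking_score(engagement=0.9, ai_misinfo_prob=0.8,
                    crowd_flag_rate=0.4, fact_checked_false=True))
```

The point of combining multiplicative penalties is that no single signal decides the outcome: each mechanism checks the others, which is why the approaches need to be designed together rather than deployed in isolation.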
These alternative approaches can reduce the harm that misinformation causes, without reducing free speech. Meanwhile, users should not remain reliant on the benevolence of tech platforms to do just enough about misinformation to satisfy the government of the day. We should be careful about surrendering power to both platforms and governments.