How Does One Control Hate Speech and Fake News, without Curbing the Freedom of Expression?

T.R. Raghunandan

6 December 2020

The question of how to control fake news and hate speech without disrupting freedom of expression bears repetition, but repetition alone does not bring us any closer to answers.

Before we come to finding ways of controlling these, the place for a policymaker to start is to define what constitutes hate speech and fake news.

With respect to ‘Hate Speech’, the UN prepared a strategy and plan of action on it in May 2019. It is significant that the UN webpage on which this document is placed in the public domain is devoted to genocide prevention. Clearly, the linkage between hate speech, incitement, and eventually, violence has been recognised emphatically. In this document, ‘Hate Speech’ is defined as:

 “…any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.”

Fake news is defined rather loosely by Wikipedia, as good a source to check as any other when the dimensions and implications of such news are rapidly changing. It defines fake news crisply, as ‘false or misleading information presented as news’. However, the term has been widely used to describe all kinds of information, including information that is true but is critical of someone. The outgoing US President frequently applied the term to news coverage that was merely unflattering to him. Journalists are thus moving to dispense with the term altogether, because its real meaning has been diluted through rampant misuse.

One such journalist-researcher – Claire Wardle – attempted to deconstruct the concept of fake news and has identified seven categories: 

  • Satire or parody (“no intention to cause harm but has the potential to fool”);
  • False connection (“when headlines, visuals or captions don’t support the content”);
  • Misleading content (“misleading use of information to frame an issue or an individual”);
  • False context (“when genuine content is shared with false contextual information”);
  • Impostor content (“when genuine sources are impersonated” with false, made-up sources);
  • Manipulated content (“when genuine information or imagery is manipulated to deceive”, as with a “doctored” photo);
  • Fabricated content (“new content is 100% false, designed to deceive and do harm”).

Following the indiscriminate use of the term, Wardle rejected the phrase as being “woefully inadequate” to describe the underlying issues. Instead, she said, there were three distinct problems that needed to be tackled:

  • Misinformation, which is false information disseminated without harmful intent;
  • Disinformation, which is false information created and shared by people with harmful intent; and
  • Malinformation, which is the sharing of “genuine” information with the intent to cause harm.

Clearly, what constitutes hate speech and fake news is itself evolving. Yet, enough thought seems to have gone into this subject to provide policymakers with a core idea of what these concepts mean. This is a good place from which to start controlling or curbing their spread.

That brings us to the next question: how exactly does one contain the spread of both hate speech and fake news? If one were in a totalitarian state, the answer would be an easy one – come down on it hard, and ban, remove and criminalise related actions. However, in totalitarian states, the task of identifying what constitutes such communication or content is determined by bodies that consider themselves to be above criticism. It is a no-brainer, then, that such governments would immediately use the handle of controlling hate speech and fake news to crush genuine political dissent and criticism of their own performance. That has happened in China, which has consistently used the phenomenon of fake news and hate speech as a justification for greater control over the internet.

Ethical codes of conduct have had value in some countries, where self-imposed regulations by media watchdogs and media associations have resulted in a modicum of self-regulation. However, such approaches are not foolproof; in any case, they would not prevent private individuals, such as social media users, from posting hate speech and disseminating fake news.

Where the ethical conduct of the media industry is especially weak, such codes of conduct do not work at all. In India, the Press Council of India set up a Committee in 2011 to investigate the phenomenon of ‘paid news’. The report was damning; it listed several mainstream newspaper and magazine companies as encouraging the publishing of fake news. Even though the report named and shamed several publications, the Press Council was unable to impose its will on these media houses and channels.

In the absence of public criticism, the perpetrators of fake news continued their despicable acts rather merrily. Another sting operation conducted on media channels a few years later revealed that the problem had only grown. Media channels, including some named and shamed in the 2011 report, were seen to be willing to accept money to push a particular point of view, shame certain political leaders, extol a particular religion to the detriment of another, and write reports biased towards a particular political party.

So, if self-regulation is not the answer, particularly in India where self-regulatory institutions are weak, what could be the solution? Could it be legal intervention? If so, will that diminish the problem, or only introduce another vacillating player into the arena – the judiciary – which again will need to apply the yardstick of what constitutes hate speech or fake news on a case-by-case basis?

That brings me to the same question I asked last time around: what happens when the act of dissemination is done by an algorithm that has a bias towards amplifying hate speech and fake news, rather than suppressing them?

Are we looking towards a time when control over the internet is the sole effective remedy? And if we adopt such harsh measures, do we not forsake the freedom of speech itself?

T.R. Raghunandan is an Advisor at the Accountability Initiative.
