
2023 Amendments to the IT Rules: Drawing a Curtain on Free Speech

Sonarekha Ray

Introduction

Comedian Kunal Kamra filed a petition before the Bombay High Court, flagging several legal flaws and challenging the constitutional validity of the amendments that were passed to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘IT Rules’) in April 2023. In response to the petition, the clarifications that the Indian government released hold the potential to curb free speech on the Indian internet landscape.

Rules 3(1)(b)(v) and 3(1)(a) of the IT Rules, 2021 have been amended by the new Rules, which direct social media intermediaries to make “reasonable efforts” to ensure that their users do not “host, display, upload, modify, publish, transmit, store, update, or share any information” that is “identified as fake or false or misleading by a fact check unit of the central government in respect of any business of the central government”. In effect, these Rules mandate that intermediaries block any content about the government that the government-run Fact Checking Unit (FCU) prohibits them from displaying. The affidavit filed in response to Kamra’s petition goes on to clarify that any non-compliance by an intermediary shall result in the revocation of the immunity granted under Section 79 of the IT Act, which protects intermediaries from liability for third-party content displayed on their platforms.

Section 69A of the IT Act, 2000 empowers the Central Government to direct the blocking of content on social media platforms, and the IT Blocking Rules, 2009 prescribe the procedure for doing so. In the landmark judgement of Shreya Singhal, the Supreme Court upheld the constitutional validity of Section 69A. This piece shall therefore gauge the constitutionality of the recently amended Rules against that yardstick.

Dilution of the Safe Harbour Provision

The clarificatory affidavit seeks to revoke the immunity that Section 79 offers to intermediaries. This immunity is referred to as the ‘Safe Harbour Protection’, which shields intermediaries from liability arising out of third-party content on their platforms. The immunity is available provided certain due diligence measures are taken per Rule 3 of the IT Rules. The 2023 amendments put a check on this immunity by diluting the Safe Harbour Protection. An analysis of the history of such dilution is relevant to indicate the context in which the present amendments operate.

Since the IT Rules, 2021 were passed, the due diligence requirements under Rule 3 have undergone constant additions: appointing a grievance officer, periodically publishing compliance reports, breaking end-to-end encryption by enabling identification of the first originator of information, and so on. Such additions have increased the onus of compliance on intermediaries, enlarging the burden of preserving their Safe Harbour immunity. The 2022 amendment expanded the scope of compliance even further by mandating a new set of obligations, the most significant being the amendment to Rule 3(1)(b)(v), which obligated intermediaries to make reasonable efforts to cause users not to upload or share “misinformation” or “misleading information”, making them referees of free speech as well.

While accountability in today’s digital age serves as the justification for these amendments, these additional obligations have fundamentally altered the internet landscape in India. The architecture of the IT Rules has changed so much that intermediaries are now increasingly inclined to work in tandem with the government and uphold its interests, rather than being harbours of free speech in the digital space, especially when any non-compliance leads to a dissipation of their Safe Harbour Protection. Twitter’s 100% compliance with government takedown directions is evidence of the same.

In a similar manner, the 2023 amendments incentivise intermediaries to simply comply with government orders rather than lose their well-cherished immunity, even as the government creates the impression that the nature and effect of these amendments are quite innocuous. This is the context in which these amendments are most likely to operate, and it must therefore be taken into account in any assessment of their constitutional validity.

Unavailability of Procedural Safeguards

The Supreme Court, in Shreya Singhal, upheld the constitutional validity of Section 69A, one reason being the adequate procedural safeguards for content blocking enumerated in the IT Blocking Rules. Rule 8 requires the Government to “make all reasonable efforts to identify the person or intermediary who has hosted the information” and serve them a notice seeking an explanation as to why the content should not be blocked. Such notice is followed by a written order, which can be challenged in Court. This is an internationally accepted procedure termed the ‘Notice and Takedown’ approach. It also complies with the principles of natural justice and due process that the Supreme Court in Maneka Gandhi held to be constitutionally vital when administrative actions have civil or constitutional consequences.

However, as clarified in the affidavit, the amended Rule 3(1)(b)(v) fails to provide any procedural safeguards to the intermediaries or the content creators. It strictly requires intermediaries to block any content on the FCU’s direction and, if they do not, to pay the price in Court. There is no due procedure providing for a pre-decisional hearing of the intermediary’s or content creator’s views, and no scope for a defence. This is a digression from the widely accepted Notice and Takedown approach.

The Indeterminate Scope of “False, Fake or Misleading”

In addition to the lack of procedural safeguards, the amended Rules also raise free speech concerns. As mentioned earlier, the amended Rule 3(1)(b)(v) requires intermediaries to take down content which the FCU has determined to be “false, fake or misleading” “in respect of any business of the Central Government”. The free speech issues here can be projected into two parts – (i) ‘false, fake or misleading’ content, and (ii) ‘in respect of any business of the Central Government’.

Dealing with the first part, the Court opined in Shreya Singhal that if any Court or government agency issues an order blocking content on social media platforms, its considerations “must strictly conform to the subject matters laid down in Article 19(2)”. Further, in Kaushal Kishor, the Supreme Court clarified that the eight grounds provided in Article 19(2) for restricting free speech are exhaustive. Any restriction imposed outside the purview of these eight grounds shall be invalid. “False, fake or misleading” does not qualify as one of the grounds for restricting free speech. These terms are too broad in their ambit, encompassing speech that is both lawful and unlawful. In Shreya Singhal, the Court observed that free speech can be restricted only if it can possibly create a situation of imminent violence; that is, the law can be invoked only where there is an imminent relationship between speech and violence. The jurisprudence surrounding the ‘imminent lawless action’ test is of relevance here. The test, as laid down by the Supreme Court of the United States in Brandenburg v. Ohio, holds that speech which incites “imminent lawless action” and “is likely to produce such action” is not constitutionally protected. Several subsequent American judgements have relied upon this test. The European Court of Human Rights, in Karatas v. Turkey, held that no speech can be restricted that does not “actually incite or have the effect of producing imminent violence”. Restricting free speech only on the ground of imminent violence is now an established principle across jurisdictions, and Article 19(2) reflects the same school of thought in India as well.

Hence, content that is ‘false’ or ‘misleading’ may be reasonably restricted only if it can imminently cause violence. Moreover, a differentiation must be made for content such as stand-up comedy on the internet, which may use deliberate and even misleading exaggeration that nonetheless remains within the boundaries of satire.

Due to these complications, other jurisdictions, while drafting policies on ‘false or misleading’ information on the internet, have attempted to define these terms adequately. They have incorporated factors such as the intention behind expressing the information, its viral nature, and its potential to cause a backlash or undermine public trust. In India, however, even though the country’s existing jurisprudence demands such clarity for nuanced terms, the 2023 amendments have failed to address the ambit of ‘false, fake or misleading’.

When we come to the second part of the free speech issues, the situation worsens. Since the government omitted to define the scope of the phrase ‘business of the Central Government’, the term is open to misuse and manipulation, for it could encompass within its ambit almost every function of a modern welfare state. Further, the phrase strikes at the heart of democratic discourse in India. In a welfare state, it is imperative that the laws and policies of governments be subject to intense public scrutiny, because they can and do affect the everyday lives of citizens. Given this, even if fake or misleading information flows into public discourse, the government possesses ample avenues to provide explanations and clarifications as and when required. The fundamental tenet of participatory democracy receives a blow when discourse on the ‘business of the government’ is censored under the pretext of checking misinformation. The amendments are an unnecessary transgression of fundamental rights, especially considering that less restrictive alternatives, such as flagging misinformation or increasing media literacy, are readily available and effective. This breaches the proportionality test, which requires the ‘least restrictive’ approach to be adopted.

Conclusion

Since social media intermediaries now play a significant role in the digital space, they have come to acquire considerable autonomy and power in navigating the dissemination of information on the internet. In a recent judgement, the Delhi High Court observed: “With great power, comes great responsibility”. However, the ‘great responsibility’ has not resulted in ‘great regulatory legislation’. The IT Rules do not protect user autonomy; instead, they use the Safe Harbour Protection as a leverage tool to tighten the government’s hold on intermediaries and make them work in line with its interests. Since 2021, amendments to the IT Rules have been challenged on multiple legal and constitutional grounds, and the 2023 amendments most explicitly manifest the government’s objective of exercising undue control over the free speech of Indian citizens. Firm objections to such misuse of power must see the light of day, especially when Indian citizens are increasingly exercising their right to free speech on social media.

The author is a student at National Law University, Odisha.
