From “Artificial” to “Synthetic”: Decoding the intentionality behind this shift in the New IT Intermediaries Amendment Rules, 2026

Lauren Prem

I. Introduction

The increasing use of Artificial Intelligence (‘AI’) and its integration into social media have made it difficult to distinguish AI-generated content from reality-based content on social media platforms. This pervasive use has called for some form of regulation of AI-generated content to prevent its misuse. Pursuant to this goal, the Ministry of Electronics and Information Technology (‘MeitY’), on 10 February 2026, notified amendments to the Information Technology Rules, 2021, which will come into force on 20 February 2026 (‘new IT amendment rules’).

The key term in the new IT amendment rules is “synthetically generated information”, which is said to include audio, visual, or audio-visual information that is “artificially or algorithmically created, generated, modified or altered” such that it appears authentic and indistinguishable from real persons or events, subject to enumerated carve-outs. This definition has sparked discussion amongst academics for its deviation from the more widely used, globally pervasive term “artificially generated information.”

This blog seeks to examine: (i) the rationale of the legislature for inventing the new term “synthetically generated information”; and (ii) the possible legal implications arising out of the use of such a term. In pursuit of this, the blog explores comparable definitions under international regulatory frameworks and how such information is defined, treated and regulated under them.

II. Meaning of “artificial” and “synthetic”

The deviation embodied in the key term “synthetically generated information” in the new IT amendment rules calls for a deeper look into the meaning of the terms “artificial” and “synthetic.” The generally accepted principle of statutory interpretation that the legislature chooses every term deliberately, and does not use synonyms at random, pushes for closer scrutiny of this usage.

As per their ordinary English dictionary connotations, the terms artificial and synthetic seem to be interchangeable. The Cambridge Dictionary defines synthetic as “false, artificial or made artificially” and synthetic products as “products made from artificial substances, often copying a natural product.” The term artificial, in turn, is defined as “made by people, often as a copy of something natural.” The same definitional exercise, carried out from a scientific or technical perspective, has not proven fruitful either, as it produces equally synonymous results. These results do not help decode the legislative intent. Thus, a different angle to this decoding exercise is required, and it is wise to analyse similar statutory frameworks across the globe.

III. Digging Deeper into the Reason for the Shift in Terms

Rapidly evolving spheres are usually regulated through borrowing from other jurisdictions rather than by legislating from scratch, for the logical reason that an existing framework has been continually tested through judicial challenge and has thus gained strength over time.

A. EU’s treatment of similar terms

The EU AI Act uses the terms “artificially generated” and “artificially manipulated” repeatedly throughout its provisions, as opposed to “synthetically generated” or any similar term. These terms are used to broaden the scope of covered content to include “artificially generated or artificially manipulated content.” There is, however, no source explaining why the term “artificial” has been used in place of “synthetic” or other similar terms.

The EU AI Act is a comprehensive framework for regulating AI and its implications for the creation of synthetic content. Under Article 50, providers of AI systems that generate synthetic audio, image, video, or text content, including deepfakes, are subject to certain transparency obligations. While the Act does not provide an explicit legal definition of “synthetic” as distinct from “artificially generated” content, it uses the term “synthetic” to refer to data created through a training process performed on real-world datasets and statistically representative output. The absence of a clearly delineated distinction between “synthetic” and “artificially generated” content in EU law offers little meaningful guidance for the development of similar laws in India. In light of the EU’s ongoing struggle to define a legal boundary between “synthetically” and “artificially” created content, the usage in the IT Intermediaries Amendment Rules, 2026 (“synthetically generated” rather than “artificially generated”) suggests that Indian and EU standards are not aligned. Further work is therefore needed to clarify what these respective classifications mean for the regulation of such content.

B. Interpretational burdens arising from the introduction of a new term

The task of interpreting the law without legislative history or an explanatory memorandum is left to adjudicating authorities, such as courts and governmental authorities in India. It will be up to Indian courts and the Ministry to develop, from first principles, the analytical distinction between origin-based and effect-based definitions of synthetic content.

The only other domestic frameworks that define artificially generated content are university bye-laws, which do not pertain to regulating such content in the context of intermediaries. An analysis of parallel frameworks therefore suggests that the legislature has coined an entirely new term to regulate artificially generated content. The legislature is equipped to do so, as creating a legal fiction is a permissible exercise of legislative power in aid of efficient regulation. However, the question arises as to why the legislature chose not to borrow from an already evolving jurisprudence and instead opted to start from square one.

IV. Plausible reason for coining a new term

A. Linguistic connotations of “artificial” and “synthetic” and loss of epistemic trust

The use of “synthetic” often implies manufactured objects. While artificial objects are also non-natural, “synthetic” indicates that they were created through a conscious process of combination, modelling or algorithmic assembly. In technological discourse, synthetic media and synthetic data refer to computer-generated output that replicates actual events with very high fidelity. Hence, the Indian legislature appears to have intended to capture a more limited class of hyper-realistic manufactured content, such as deepfakes and realistic simulations, rather than every type of AI-generated content. The scope is narrowed in order to avoid confusion in interpretation.

Importantly, this reading is reinforced by the structure of the definition in the amended IT rules. The definition not only refers to the creation of artificial content but also states that such content must appear to be real, authentic or true, and indistinguishable from a real person or real-world event. The legislative issue that the new IT rules are intended to address therefore seems to revolve around deceiving individuals through realistic representations. The legislature’s concern, in other words, is not artificial intelligence itself but the product of its use: the erosion of epistemic trust in the digital environment.

Concerns about synthetic media undermining epistemic trust are not merely theoretical. Chesney and Citron have identified the “liar’s dividend”: the risk that the mere existence of realistic synthetic media will enable malicious actors to undermine genuine information by raising doubts about its origins. They cite empirical research by Vaccari and Chadwick, which found that viewing a deepfake political video left viewers with higher levels of uncertainty and lower levels of trust in authentic news. The regulatory concern behind the IT Intermediaries Amendment Rules, 2026 thus goes well beyond isolated cases of fraud and addresses a structural erosion of trust in information. The legislature’s attention to synthetic media as a distinct category is therefore warranted, regardless of whether there is evidence of bad-faith intention.

B. Indistinguishability standards to avoid over-broad regulation

On the same note, the term “synthetic” could help limit the scope of the deepfakes to which the regulations apply and thereby avoid challenges on the basis of over-breadth. A regulation built on the concept of “artificially generated information” would likely have been open to the argument that it captured all outputs of any application of AI, whether harmful or harmless. In contrast, the current definition rests on the concepts of deception and perceptual indistinguishability, which improve its ability to pass the proportionality test in constitutional review, particularly under Article 19(1)(a) of the Constitution of India.

There is no statutory standard in Indian procedural law, including the Bharatiya Sakshya Adhiniyam, 2023, for determining the criterion of “indistinguishability,” that is, whether synthetic output is perceptually or functionally indistinguishable from the output of a natural person, i.e., a human author. The procedural concerns are twofold. First, courts and intermediaries will be required to answer two questions: (1) indistinguishable to whom? and (2) under what conditions? Second, a forensic examiner using metadata or frequency analysis may be able to identify synthetic outputs that an ordinary user cannot, a fact acknowledged in the technical literature on detecting AI-generated content.
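To make the forensic point concrete, the following is a minimal illustrative sketch in Python (using the Pillow and NumPy libraries) of the two techniques referenced above: inspecting image metadata for generator traces, and a crude frequency analysis of the image spectrum. The generator names, the file name, and the frequency cut-off are assumptions made purely for illustration, not established forensic standards or anything prescribed by the Rules.

import numpy as np
from PIL import Image

def metadata_hints(path):
    # Ordinary viewers never see metadata; forensic tools read it directly.
    exif = Image.open(path).getexif()
    software = str(exif.get(305, ""))  # EXIF tag 305 = "Software"
    # Assumed generator names for illustration; many tools strip metadata entirely.
    return [s for s in ("Stable Diffusion", "Midjourney", "DALL-E") if s in software]

def high_frequency_energy(path):
    # Fraction of spectral energy outside the low-frequency core.
    # Some generative models leave statistical artefacts in the frequency
    # domain even when the image looks flawless to the human eye.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - h // 2, x - w // 2)
    return spectrum[radius > min(h, w) / 4].sum() / spectrum.sum()

if __name__ == "__main__":
    # "suspect.jpg" is a hypothetical file name used for illustration.
    print("metadata hints:", metadata_hints("suspect.jpg"))
    print("high-frequency energy ratio:", round(high_frequency_energy("suspect.jpg"), 4))

An image that passes casual visual inspection may still carry a tell-tale “Software” tag or an anomalous spectral profile; this is precisely the gap between forensic detectability and perceptual indistinguishability that the Rules leave unresolved.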

Given that there is no settled standard within Indian procedural law to determine this criterion, it is unclear whether there will be a sufficient evidentiary foundation upon which a court may make a finding of indistinguishability. The Supreme Court’s reasoning in Shreya Singhal v. Union of India is instructive here: vagueness in a regulation that burdens free speech creates a constitutional problem, and a vague criterion of “indistinguishability” invites the same objection. A workable standard may ask whether a reasonable person of ordinary prudence would find the content indistinguishable from content generated by a natural person, adapting the average consumer test from passing-off cases and the reasonable reader standard from defamation.

Lastly, the deviation from international definitions could indicate a purposive choice to avoid harmonisation between jurisdictions. Intermediaries that operate across jurisdictions often rely on compliance frameworks built around the EU AI Act or other global standards that use “artificially generated” or “AI-generated” content. If each jurisdiction defines its terms differently, an intermediary may need to implement jurisdiction-specific compliance mechanisms. Nevertheless, this deviation will likely allow India to develop a regulatory vocabulary customised to the misinformation issues it is experiencing. It may also forestall a large volume of judicial challenges that use the EU AI Act as a benchmark to test the legislation’s constitutionality, and, in a political sense, spare the government flak for poor regulation.

V. Conclusion

The change from “artificially generated” to “synthetically generated” is significant not only in the meaning of the words but also in how they are defined in law. Dictionary definitions may blur the distinction between the two, but under principles of legislative drafting, the choice of words carries an intended purpose. The new IT amendment rules suggest that the legislature is concerned with hyper-realistic content generated by AI technologies that could mislead users and undermine the integrity of digital trust ecosystems. The use of the term “synthetic” in the Rules appears to focus on outcomes rather than methods, deception rather than creation, and perceptual indistinguishability rather than classifications based on technology.

Whether this new terminology creates regulatory clarity in India or causes interpretive uncertainty will depend on the courts and on how enforcement unfolds in practice. However, one thing is certain: the language used in regulating emerging technologies is not simply descriptive; it will shape digital accountability in the future.

Lauren Prem is a fourth-year B.Com. LL.B. student at the Tamil Nadu National Law University (TNNLU), Tiruchirappalli, Tamil Nadu.
