Mr. M. G. Kodandaram, IRS, Assistant Director (Retd.), ADVOCATE and CONSULTANT, decodes the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025
I. A Constitutional Moment in India’s Digital Governance
The Ministry of Electronics and Information Technology (MeitY) has notified the ‘Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025’[1], bringing into effect, from 15 November 2025, a carefully crafted amendment to Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”). Issued under Section 87 of the Information Technology Act, 2000 (“IT Act”), the amendment recalibrates the procedural architecture governing the takedown of unlawful online content by intermediaries. This moment is significant not for expanding State power, but for disciplining its exercise in constitutionally sensitive domains.
At first glance, the amendment appears incremental. It neither expands the categories of prohibited content nor alters the substantive grounds on which speech may be restricted. But beneath this lies a profound constitutional intervention. By precisely defining how an intermediary may acquire “actual knowledge” under Section 79(3)(b) of the IT Act, the amendment restores procedural discipline, reinforces executive accountability, and re-anchors India’s intermediary liability regime in the jurisprudential logic of Shreya Singhal v. Union of India (2015)[2].
It is interesting to note that this constitutionally grounded reform unfolds alongside a parallel and far more disruptive regulatory initiative: the proposed amendments addressing “synthetically generated information”[3] and deepfakes, particularly through a new Rule 4(1A). These draft proposals, still under consultation, seek to impose proactive verification and labelling obligations on Significant Social Media Intermediaries (“SSMIs”), thereby fundamentally altering the intermediary’s role from neutral conduit to active arbiter of authenticity. This divergence reveals two competing regulatory philosophies operating simultaneously within India’s digital governance framework.
While the notified 2025 amendment to Rule 3(1)(d) reflects a constitutionally grounded maturation of India’s intermediary liability framework, the parallel draft proposals on synthetic content threaten to unsettle the delicate balance between free speech, technological innovation, and regulatory accountability. Against this backdrop, the article traces the evolution of intermediary jurisprudence in India, analyses the constitutional logic underpinning the 2025 amendment, and compares India’s approach to AI-generated content with international regulatory models.
II. Genesis of Intermediary Liability in India
The IT Act, 2000 was enacted at a time when intermediaries were largely perceived as passive facilitators of electronic communication. Section 79 embodied this understanding by providing a conditional “safe harbour” from liability for third-party content, modelled on notice-based liability regimes rather than prior restraint. The legislative intent was clear: intermediaries should not be compelled to pre-emptively police user speech, as such an obligation would be incompatible with both scale and constitutional free expression under Article 19(1)(a).
However, this immunity was never absolute. Section 79(2) subjected safe harbour to due diligence obligations, while Section 79(3)(b) withdrew protection where the intermediary failed to act upon receiving “actual knowledge” that its platform was being used to commit an unlawful act.
The first attempt to operationalise this framework came through the IT (Intermediary Guidelines) Rules, 2011. These rules, however, suffered from vagueness and overbreadth, effectively delegating censorship decisions to private platforms. The lack of procedural clarity created strong incentives for over-removal of content, prompting widespread criticism from civil society and constitutional scholars.
The constitutional reckoning arrived in 2015. In Shreya Singhal v. Union of India, (MANU/SC/0329/2015) the Supreme Court struck down Section 66A of the IT Act and, more importantly for intermediary law, read down Section 79(3)(b). The Court held that “actual knowledge” could arise only through a court order or a notification by an appropriate government agency, and not through private complaints or subjective assessments by intermediaries. This interpretation was a deliberate constitutional choice, designed to prevent intermediaries from becoming private adjudicators of legality and to mitigate chilling effects on speech.
The IT Rules, 2021 marked a second wave of digital regulation. They significantly expanded due diligence obligations, introduced a three-tier grievance redressal mechanism, and extended regulatory oversight to digital news publishers and OTT platforms. Subsequent amendments in 2022 and 2023 tightened compliance timelines and reporting obligations.
However, Rule 3(1)(d), the provision governing takedown of unlawful content, continued to attract constitutional concern, particularly in relation to procedural opacity and executive discretion. Its reference to “notification by the appropriate Government” lacked clarity on the rank of issuing officers, the requirement of reasons, and the existence of internal review. In practice, this opacity risked reviving the very private censorship dynamics that Shreya Singhal sought to dismantle. It is against this backdrop that the 2025 amendment assumes particular significance.
III. The 2025 Amendment to Rule 3(1)(d)
The substituted Rule 3(1)(d) reads as follows: “(d) an intermediary, on whose computer resource the information which is used to commit an unlawful act which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force is hosted, displayed, published, transmitted or stored shall, upon receiving the actual knowledge under clause (b) of sub-section (3) of section 79 of the Act on such information, remove or disable access to such information within thirty-six hours of the receipt of such actual knowledge, and such actual knowledge shall arise only in the following manner, namely:—
(i) by an order of a court of competent jurisdiction; or
(ii) a reasoned intimation, in writing, —
(I) issued by an officer authorised for the purpose of issuing such intimation by the Appropriate Government or its agency, being not below the rank of Joint Secretary or an officer equivalent in rank or where an officer at such rank is not appointed, a Director or an officer equivalent in rank, to the Government of India or to the State Government, as the case may be, and, where so authorised, acting through a single corresponding officer in its authorised agency, where such agency is so appointed:
Provided that where such intimation is to be issued by the police administration, the authorised officer shall not be below the rank of Deputy Inspector General of Police, especially authorised by the Appropriate Government in this behalf:
Provided further that all such intimations shall be subject to periodic review by an officer not below the rank of the Secretary of the concerned Appropriate Government once in every month to ensure that such intimations are necessary, proportionate, and consistent with clause (b) of sub-section (3) of section 79 of the Act and this clause;
(II) clearly specifying the legal basis and statutory provision invoked, the nature of the unlawful act, and the specific uniform resource locator, identifier or other electronic location of the information, data or communication link required to be removed or disabled;”.
The above substituted Rule 3(1)(d) mandates that an intermediary must remove or disable access to information used to commit an unlawful act within thirty-six hours of receiving “actual knowledge” under Section 79(3)(b). The amendment operationalises “actual knowledge” through a closed and verifiable administrative design. Crucially, it exhaustively defines the modes through which such knowledge may arise.
Actual knowledge may arise through:
(a) an order of a court of competent jurisdiction; or
(b) a reasoned intimation in writing issued by a duly authorised government officer, subject to stringent safeguards.
These safeguards include:
(i) issuance by an officer not below the rank of Joint Secretary (or Director where such rank does not exist);
(ii) in the case of police authorities, issuance by an officer not below the rank of Deputy Inspector General of Police, specially empowered;
(iii) specification of the legal basis, statutory provision invoked, nature of the unlawful act, and precise URL or electronic identifier; and
(iv) mandatory monthly review by an officer not below the rank of Secretary to ensure necessity, proportionality, and consistency with Section 79(3)(b).
This architecture replaces vague executive notifications with a structured, reviewable, and senior-authorised process, restoring procedural discipline to content takedown.
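To make the compliance trigger concrete, the following is a minimal sketch, in Python, of how an intermediary's compliance tooling might screen an incoming intimation against the elements listed above and compute the thirty-six-hour window. The field names, rank strings, and data model are illustrative assumptions only; the Rules prescribe the safeguards, not any particular implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative rank labels only; the Rules name the ranks but do not prescribe strings.
GOVT_RANKS_PERMITTED = {"Joint Secretary", "Director"}
POLICE_RANKS_PERMITTED = {"Deputy Inspector General of Police"}


@dataclass
class Intimation:
    issuing_rank: str                 # rank of the officer issuing the intimation
    issued_by_police: bool            # whether issued by the police administration
    legal_basis: Optional[str]        # statutory provision invoked
    unlawful_act: Optional[str]       # nature of the unlawful act alleged
    resource_locator: Optional[str]   # specific URL / identifier / electronic location
    received_at: datetime             # when the intermediary received the intimation


def satisfies_rule_3_1_d(i: Intimation) -> bool:
    """Check the elements listed above: authorised rank plus the mandatory particulars."""
    permitted = POLICE_RANKS_PERMITTED if i.issued_by_police else GOVT_RANKS_PERMITTED
    has_particulars = all([i.legal_basis, i.unlawful_act, i.resource_locator])
    return i.issuing_rank in permitted and has_particulars


def takedown_deadline(i: Intimation) -> datetime:
    """The thirty-six-hour clock runs from receipt of actual knowledge."""
    return i.received_at + timedelta(hours=36)
```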
IV. Transparency, Proportionality, and Constitutional Fidelity
From a constitutional perspective, the 2025 amendment is best understood as a reaffirmation of Shreya Singhal rather than a departure from it. The amendment reflects what may be described as procedural proportionality rather than substantive expansion.
Senior-level authorisation ensures political and administrative accountability. Reasoned intimations grounded in identifiable statutory provisions introduce legality and precision. The monthly review mechanism embeds proportionality within executive decision-making itself, acting as a safeguard against bureaucratic inertia and mission creep.
Importantly, the amendment does not expand the substantive grounds of censorship. It merely disciplines the process through which existing legal prohibitions are enforced, strengthening both the legitimacy and durability of State action.
V. Practical Implications for Intermediaries and Users
For the State, the amendment bolsters enforcement credibility. By aligning takedown powers with constitutional safeguards, it insulates regulatory action from judicial invalidation and enhances public trust in digital governance.
For intermediaries, the amendment provides long-overdue clarity. Compliance obligations are now tethered to clearly identifiable triggers, reducing uncertainty and litigation risk. While the thirty-six-hour timeline remains demanding, intermediaries now know precisely when the clock begins to run.
For users, the amendment enhances procedural fairness. Content takedown decisions are embedded within a traceable administrative process, reducing the risk of arbitrary or excessive interference with lawful speech.
VI. Regulating Synthetically Generated Information
The rapid evolution of generative Artificial Intelligence (AI) has fundamentally transformed the digital information ecosystem. Technologies capable of producing highly realistic synthetic audio, visual, and textual content, often indistinguishable from authentic material, have expanded creative and commercial possibilities, while simultaneously intensifying risks of misinformation, impersonation, fraud, electoral manipulation, and erosion of public trust. It is against this backdrop that the Central Government has proposed further draft amendments to the IT Rules, 2021 addressing synthetically generated information, again in exercise of powers under Section 87 of the Information Technology Act, 2000. The proposed amendments represent a significant regulatory intervention aimed at addressing emerging AI-driven harms while preserving the foundational architecture of intermediary liability and safe harbour protection.
A defining feature of the 2025 Amendment Rules is the introduction of a statutory definition of “synthetically generated information.” By inserting clause (wa) in Rule 2(1), the Rules define such information as content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that reasonably appears to be authentic or true. The definition is deliberately broad and technology-neutral, ensuring regulatory durability amid rapidly evolving AI tools and techniques. Crucially, the focus is not on artificiality per se, but on the reasonable appearance of authenticity—thereby centring regulatory concern on deception, user harm, and misuse rather than benign or clearly fictional digital content.
To eliminate interpretational ambiguity, the Amendment Rules introduce sub-rule (1A) to Rule 2, clarifying that references to “information” in the context of unlawful acts under the IT Rules, 2021, including Rules 3 and 4, shall include synthetically generated information. This clarification is doctrinally significant. It ensures that AI-generated or manipulated content is not treated as a regulatory exception but is fully subsumed within the existing intermediary governance framework governing unlawful content, notice-and-takedown obligations, and enhanced due diligence requirements. By embedding synthetic content within the established statutory lexicon, the amendment avoids creating a parallel or fragmented regulatory regime.
At the level of intermediary protection, the 2025 amendments incorporate an important safeguard through a proviso to Rule 3(1)(b). This proviso clarifies that the removal or disabling of access to information, including synthetically generated information, undertaken in good faith, whether pursuant to user grievances or reasonable content moderation efforts, shall not be construed as a violation of the conditions for safe harbour under Section 79(2) of the IT Act. This provision reflects regulatory prudence, recognising that fear of losing statutory immunity can otherwise chill proactive content moderation. By explicitly protecting good-faith action, the Rules encourage responsible intermediary behaviour without diluting the safe harbour framework.
A notable innovation is the insertion of sub-rule (3) in Rule 3, which introduces targeted due diligence obligations for intermediaries that provide computer resources enabling the creation or modification of synthetically generated information. Such intermediaries are now required to ensure that every instance of synthetic content is clearly labelled or embedded with a permanent, unique metadata identifier. The Rules prescribe minimum visibility standards: in visual content, the label must cover at least ten percent of the display area, while in audio content, the disclosure must be audible during the initial ten percent of its duration. The prohibition on enabling the removal, suppression, or alteration of such identifiers reinforces the integrity and enforceability of the transparency mechanism. This approach reflects a regulatory preference for traceability and user awareness over outright prohibition.
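By way of illustration only, the visibility thresholds described above reduce to simple arithmetic. The sketch below, in Python, computes the minimum label coverage for visual content and the window within which an audio disclosure must fall; the draft Rules prescribe the percentages, not any particular unit or implementation.

```python
import math


def min_visual_label_area(width_px: int, height_px: int) -> int:
    """Label must cover at least ten per cent of the display area (here, in pixels)."""
    return math.ceil(0.10 * width_px * height_px)


def audio_disclosure_window_seconds(total_duration_s: float) -> float:
    """The disclosure must be audible within the initial ten per cent of the duration."""
    return 0.10 * total_duration_s


# Example: a 1920x1080 frame requires 207,360 px of label coverage;
# a 60-second clip must carry the disclosure within its first 6 seconds.
print(min_visual_label_area(1920, 1080))
print(audio_disclosure_window_seconds(60.0))
```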
Enhanced obligations are imposed on Significant Social Media Intermediaries (SSMIs) through the insertion of Rule 4(1A). Under this provision, SSMIs must obtain a declaration from users regarding whether uploaded content is synthetically generated. Beyond reliance on self-declaration, intermediaries are also required to deploy reasonable and proportionate technical measures—including automated tools—to verify the accuracy of such disclosures, having regard to the nature, format, and source of the content. Where content is identified as synthetic, the intermediary must ensure prominent labelling prior to its publication or display. Importantly, the amendments introduce a compliance-linked accountability mechanism: an intermediary that knowingly permits, promotes, or fails to act upon non-compliant synthetic content is deemed to have failed to exercise due diligence, thereby risking loss of safe harbour protection.
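A schematic of how an SSMI's upload pipeline might sequence these obligations is sketched below in Python. The declaration field, the stub classifier, and the label text are assumptions made for illustration; the draft rule speaks only of “reasonable and proportionate technical measures” and does not mandate any specific tool.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool   # declaration obtained from the user at upload


def automated_check_flags_synthetic(upload: Upload) -> bool:
    """Stub for the automated verification measures the draft rule contemplates;
    real detection models are imperfect, which is the concern raised later in this article."""
    return False  # assumption: placeholder that never flags content on its own


def prepare_for_display(upload: Upload) -> dict:
    """Sequence the Rule 4(1A)-style steps: declaration, verification, then labelling."""
    is_synthetic = upload.user_declared_synthetic or automated_check_flags_synthetic(upload)
    label: Optional[str] = "Synthetically generated information" if is_synthetic else None
    return {"content_id": upload.content_id, "label": label}


print(prepare_for_display(Upload("demo-001", user_declared_synthetic=True)))
```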
VII. Accompanying Explanatory Notes
The Explanatory Note[4] accompanying the proposed amendments provides critical insight into the Government’s regulatory rationale. Anchored in the objective of ensuring an “Open, Safe, Trusted and Accountable Internet,” the Note identifies the proliferation of highly realistic AI-generated content—particularly deepfakes—as a systemic threat capable of inflicting multidimensional harm. These harms include non-consensual intimate imagery, financial fraud, impersonation, large-scale misinformation, electoral interference, and a broader erosion of trust in digital ecosystems. Recognising that synthetic content increasingly blurs the line between truth and fabrication, the Note justifies the need for strengthened intermediary due diligence, especially for platforms with significant reach and influence.
The Explanatory Note clarifies that synthetically generated information squarely falls within the ambit of “information” used to commit unlawful acts under existing provisions, including Rules 3(1)(b) and 3(1)(d), thereby aligning AI-generated harms with established notice-and-takedown and lawful order-based mechanisms. At the same time, it signals a decisive policy shift toward anticipatory regulation. Unlike the reactive, order-driven obligations under Rule 3(1)(d), the proposed framework for synthetic content is proactive, continuous, and technology-dependent. By mandating labelling, metadata embedding, user declarations, and verification measures, the State seeks to embed transparency and accountability directly into platform governance structures.
Nevertheless, the Explanatory Note also reflects an attempt to balance enhanced accountability with intermediary protection. It expressly safeguards good-faith removal of harmful synthetic content under Section 79(2) of the IT Act, thereby acknowledging constitutional concerns surrounding over-censorship and chilling effects on free expression. This balance underscores the regulatory intent to recalibrate, rather than dismantle, intermediary liability in response to generative AI.
Collectively, the 2025 Amendment Rules represent a calibrated and constitutionally conscious response to the challenges posed by AI-generated and synthetic content. Rather than imposing blanket prohibitions or content-based censorship, the framework prioritises transparency, traceability, and informed user choice, while remaining anchored in the safe harbour principles of the IT Act. By integrating synthetic content regulation within the existing intermediary governance architecture, the amendments seek to preserve innovation and free expression while addressing demonstrable harms. As generative technologies continue to evolve, the 2025 framework provides a foundational legal architecture: one that signals a shift toward anticipatory governance, yet remains attentive to constitutional limits and the need for regulatory restraint.
VIII. Safe Harbour Under Strain
Read as a whole, however, the Explanatory Note signals a decisive policy shift toward anticipatory regulation. The State intends to move beyond reactive enforcement and to embed continuous transparency and verification obligations within platform governance structures, thereby recalibrating the contours of intermediary liability in response to the perceived risks posed by generative artificial intelligence.
Section 79 was designed to ensure intermediaries are not compelled to police content proactively. The draft synthetic content rules risk reintroducing constructive knowledge through the back door. By mandating verification tools, the law presumes detection capacity that does not yet reliably exist.
Deepfake detection technologies remain imperfect. The regulatory asymmetry is stark: intermediaries face little risk for over-removal but significant liability for under-detection. The rational response is over-censorship. This regulatory asymmetry, rather than malicious intent, threatens the continued viability of intermediary neutrality.
IX. Enduring Relevance of Shreya Singhal
The Supreme Court in Shreya Singhal was acutely conscious of chilling effects. The draft synthetic content rules risk recreating this environment through algorithmic enforcement. While a proviso protects intermediaries removing synthetic content, the real risk lies in loss of safe harbour for failure to detect, skewing incentives toward suppression of lawful speech.
The European Union’s AI Act, adopted in 2024, offers a useful contrast in regulatory design rather than substantive objectives. Article 50 imposes transparency obligations on providers and deployers of certain AI systems, not on hosting intermediaries. The EU model preserves intermediary safe harbour, recognises technical limits, and adopts a risk-based approach with exemptions for artistic and satirical expression.
Notably, the 2025 amendment to Rule 3(1)(d) demonstrates that India already possesses a constitutionally sound mechanism to address unlawful content, including harmful deepfakes. The central regulatory question is not whether to regulate AI-generated harm, but how. Targeted orders, criminal law, civil remedies, and public investment in AI forensics offer more precise responses than continuous platform monitoring.
X. Choosing the Future of India’s Digital Constitution
The 2025 amendment to Rule 3(1)(d) reflects measured, transparent, and accountable digital governance. By restoring procedural discipline to content takedown and aligning executive action with constitutional safeguards, it reaffirms the intermediary’s role as a neutral conduit rather than an adjudicator of legality. The amendment demonstrates that India already possesses a constitutionally sound mechanism to address unlawful online content, including harmful manifestations of AI-generated material, through targeted orders, clearly defined authority, and built-in proportionality review.
The parallel push toward proactive verification of synthetically generated content, however, threatens to unsettle this carefully restored balance. By imposing continuous, technology-dependent obligations on intermediaries, particularly Significant Social Media Intermediaries, the draft framework risks transforming platforms from facilitators of speech into instruments of anticipatory regulation. This shift carries significant implications for free expression, innovation, and intermediary neutrality, especially in light of the technical limitations of deepfake detection and the asymmetric liability incentives that favour over-removal.
India thus stands at a constitutional crossroads: between preserving intermediaries as neutral conduits of speech, subject to clearly triggered and reviewable takedown obligations, and recasting them as active monitors responsible for verifying authenticity at scale. The regulatory choices made in navigating AI-generated content will shape not merely platform governance, but the contours of India’s digital constitutional order. Whether the future lies in procedural restraint anchored in Shreya Singhal, or in expansive anticipatory regulation driven by technological anxiety, will determine how free speech, accountability, and innovation coexist in India’s democratic digital ecosystem.
Mr. M. G. Kodandaram, IRS.
Reference
[1] https://www.meity.gov.in/static/uploads/2025/10/708f6a344c74249c2e1bbb6890342f80.pdf
[2] https://indiankanoon.org/doc/110813550/
[3] https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf
[4] https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf






