GDPR implementation can sometimes be crazy

Recently there was an interesting Austrian supervisory authority decision imposing a fine of Euro 600 on the owner of a Tesla car. The car owner had installed seven cameras which could film when the car was parked, in order to recognise possible threats. The argument of the supervisory authority was that the cameras could also film people who were not threats and that the data subjects were not informed about the filming.

This decision indicates that the “Security” of the individual was considered subordinate to the principle of “Privacy”. Secondly, it did not matter that the car owner had no way to restrict the recording to only those persons who were considered threats and delete the footage of those who were not.

There is no doubt that this decision is one of those crazy decisions for which GDPR supervisory authorities are known. However, the new Digital Omnibus Proposal could change things here, since the owner of the cameras has no means of identifying the persons whose pictures have been captured, and hence the data would not be considered “Personal Data”.

If the persons in the footage are to be identified, it would be through an additional process of matching the faces with facial recognition software, and whoever uses this process would be responsible for the infringement of privacy and for obtaining consent. The car owner who has recorded the video and does not distribute it or sell it for exploitation should be free from liability.

Further, if the data captured by the cameras is overwritten automatically and referred to only when there is a security incident, then the captures automatically get deleted within a reasonable time and hence there should be no violation of privacy principles.
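Purely to illustrate the retention pattern described above (a hypothetical sketch, not Tesla's actual implementation), such a rolling buffer can be modelled in a few lines of Python, where ordinary clips are overwritten automatically and only clips flagged by a security incident are retained:

```python
from dataclasses import dataclass


@dataclass
class Clip:
    clip_id: int
    flagged: bool = False  # set True only when a security incident refers to this clip


class RollingBuffer:
    """Hypothetical rolling buffer: unflagged clips are overwritten automatically."""

    def __init__(self, capacity: int = 10):
        self.capacity = capacity
        self.clips = []

    def record(self, clip: Clip) -> None:
        # When the buffer is full, discard the oldest unflagged clip so that
        # ordinary footage never persists beyond the buffer window.
        if len(self.clips) >= self.capacity:
            for i, old in enumerate(self.clips):
                if not old.flagged:
                    del self.clips[i]
                    break
        self.clips.append(clip)

    def flag_incident(self, clip_id: int) -> None:
        # Retain a clip beyond the buffer window only because it is needed
        # as evidence of a security incident.
        for clip in self.clips:
            if clip.clip_id == clip_id:
                clip.flagged = True
```

Under such a design, footage of uninvolved passers-by is never consulted and disappears on its own within the buffer window.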

Further, the car owner could argue that it is Tesla which has perhaps failed to provide appropriate guidelines for car users on how to handle the captures without violating GDPR. Tesla should perhaps indemnify the car owner.

One more point to debate is that if the car is parked in a public place, the captures would be of the public space. Hence, anybody else who appears in front of this camera would perhaps also be considered as being in a public place. It is our view that when a person enters a “Public Space”, he is voluntarily exposing himself to the public and should not engage in any activity there which he would later want privacy law to protect.

Further, to consider an individual car owner trying to protect his property as a Data Controller, and to impose on him the liabilities of GDPR compliance, is simply crazy. By this standard, all “Dashboard Cameras” and “Reverse Parking Cameras” would also be violating GDPR because anybody can come in front of such cameras.

The decision is unacceptable and should be considered an aberration.

The case opens up many academic points for debate. Comments are welcome.

On the lighter side, the potential for GDPR compliance training is now open to all individuals who may be considered “Data Controllers” whenever they use their mobiles to take pictures in public or install CCTV cameras anywhere!

It is alarming to see that there have been 210 decisions from different supervisory authorities since 2020 in which individuals have been fined under GDPR. This requires a debate of its own.

Naavi

Ref: https://www.enforcementtracker.com/ETid-2975


Governing AI-Generated Content: Intermediary Compliance, Free Speech, and Regulatory Prudence

Mr. M. G. Kodandaram, IRS, Assistant Director (Retd), ADVOCATE and CONSULTANT, decodes the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025

I. A Constitutional Moment in India’s Digital Governance

The Ministry of Electronics and Information Technology (MeitY) notified the ‘Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025[1]’, bringing into effect, from 15 November 2025, a carefully crafted amendment to Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”). Issued under Section 87 of the Information Technology Act, 2000 (“IT Act”), the amendment recalibrates the procedural architecture governing the takedown of unlawful online content by intermediaries. This moment is significant not for expanding State power, but for disciplining its exercise in constitutionally sensitive domains.

At first glance, the amendment appears incremental. It neither expands the categories of prohibited content nor alters the substantive grounds on which speech may be restricted. But beneath this lies a profound constitutional intervention. By precisely defining how an intermediary may acquire “actual knowledge” under Section 79(3)(b) of the IT Act, the amendment restores procedural discipline, reinforces executive accountability, and re-anchors India’s intermediary liability regime in the jurisprudential logic of Shreya Singhal v. Union of India [2](2015).

It is interesting to note that this constitutionally grounded reform unfolds alongside a parallel and far more disruptive regulatory initiative: the proposed amendments addressing “synthetically generated information”[3] and deepfakes, particularly through a new Rule 4(1A). These draft proposals, still under consultation, seek to impose proactive verification and labelling obligations on Significant Social Media Intermediaries (“SSMIs”), thereby fundamentally altering the intermediary’s role from neutral conduit to active arbiter of authenticity. This divergence reveals two competing regulatory philosophies operating simultaneously within India’s digital governance framework.

While the notified 2025 amendment to Rule 3(1)(d) reflects a constitutionally grounded maturation of India’s intermediary liability framework, the parallel draft proposals on synthetic content threaten to unsettle the delicate balance between free speech, technological innovation, and regulatory accountability. Against this backdrop, the article traces the evolution of intermediary jurisprudence in India, analyses the constitutional logic underpinning the 2025 amendment, and compares India’s approach to AI-generated content with international regulatory models.

II. Genesis of Intermediary Liability in India

The IT Act, 2000 was enacted at a time when intermediaries were largely perceived as passive facilitators of electronic communication. Section 79 embodied this understanding by providing a conditional “safe harbour” from liability for third-party content, modelled on notice-based liability regimes rather than prior restraint. The legislative intent was clear: intermediaries should not be compelled to pre-emptively police user speech, as such an obligation would be incompatible with both scale and constitutional free expression under Article 19(1)(a).

However, this immunity was never absolute. Section 79(2) subjected safe harbour to due diligence obligations, while Section 79(3)(b) withdrew protection where the intermediary failed to act upon receiving “actual knowledge” that its platform was being used to commit an unlawful act.

The first attempt to operationalise this framework came through the IT (Intermediary Guidelines) Rules, 2011. These rules, however, suffered from vagueness and overbreadth, effectively delegating censorship decisions to private platforms. The lack of procedural clarity created strong incentives for over-removal of content, prompting widespread criticism from civil society and constitutional scholars.

The constitutional reckoning arrived in 2015. In Shreya Singhal v. Union of India (MANU/SC/0329/2015), the Supreme Court struck down Section 66A of the IT Act and, more importantly for intermediary law, read down Section 79(3)(b). The Court held that “actual knowledge” could arise only through a court order or a notification by an appropriate government agency, and not through private complaints or subjective assessments by intermediaries. This interpretation was a deliberate constitutional choice, designed to prevent intermediaries from becoming private adjudicators of legality and to mitigate chilling effects on speech.

The IT Rules, 2021 marked a second wave of digital regulation. They significantly expanded due diligence obligations, introduced a three-tier grievance redressal mechanism, and extended regulatory oversight to digital news publishers and OTT platforms. Subsequent amendments in 2022 and 2023 tightened compliance timelines and reporting obligations.

However, Rule 3(1)(d), the provision governing takedown of unlawful content, continued to attract constitutional concern, particularly in relation to procedural opacity and executive discretion. Its reference to “notification by the appropriate Government” lacked clarity on the rank of issuing officers, the requirement of reasons, and the existence of internal review. In practice, this opacity risked reviving the very private censorship dynamics that Shreya Singhal sought to dismantle. It is against this backdrop that the 2025 amendment assumes particular significance.

III. The 2025 Amendment to Rule 3(1)(d)

The substituted Rule 3(1)(d) reads as follows: (d) an intermediary, on whose computer resource the information which is used to commit an unlawful act which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force is hosted, displayed, published, transmitted or stored shall, upon receiving the actual knowledge under clause (b) of sub-section (3) of section 79 of the Act on such information, remove or disable access to such information within thirty-six hours of the receipt of such actual knowledge, and such actual knowledge shall arise only in the following manner, namely:—

(i) by an order of a court of competent jurisdiction; or
(ii) a reasoned intimation, in writing,—
(I) issued by an officer authorised for the purpose of issuing such intimation by the Appropriate Government or its agency, being not below the rank of Joint Secretary or an officer equivalent in rank or, where an officer at such rank is not appointed, a Director or an officer equivalent in rank, to the Government of India or to the State Government, as the case may be, and, where so authorised, acting through a single corresponding officer in its authorised agency, where such agency is so appointed:

Provided that where such intimation is to be issued by the police administration, the authorised officer shall not be below the rank of Deputy Inspector General of Police, especially authorised by the Appropriate Government in this behalf:

Provided further that all such intimations shall be subject to periodic review by an officer not below the rank of the Secretary of the concerned Appropriate Government once in every month to ensure that such intimations are necessary, proportionate, and consistent with clause (b) of sub-section (3) of section 79 of the Act and this clause;

(II) clearly specifying the legal basis and statutory provision invoked, the nature of the unlawful act, and the specific uniform resource locator, identifier or other electronic location of the information, data or communication link required to be removed or disabled;”.

The above substituted Rule 3(1)(d) mandates that an intermediary must remove or disable access to information used to commit an unlawful act within thirty-six hours of receiving “actual knowledge” under Section 79(3)(b). The amendment operationalises “actual knowledge” through a closed and verifiable administrative design. Crucially, it exhaustively defines the modes through which such knowledge may arise.

Actual knowledge may arise through:
(a) an order of a court of competent jurisdiction; or
(b) a reasoned intimation in writing issued by a duly authorised government officer, subject to stringent safeguards.

These safeguards include:
(i) issuance by an officer not below the rank of Joint Secretary (or Director where such rank does not exist);
(ii) in the case of police authorities, issuance by an officer not below the rank of Deputy Inspector General of Police, specially empowered;
(iii) specification of the legal basis, statutory provision invoked, nature of the unlawful act, and precise URL or electronic identifier; and
(iv) mandatory monthly review by an officer not below the rank of Secretary to ensure necessity, proportionality, and consistency with Section 79(3)(b).

This architecture replaces vague executive notifications with a structured, reviewable, and senior-authorised process, restoring procedural discipline to content takedown.
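Purely as an illustration of the procedural gate the amendment creates (a sketch of how an intermediary's compliance tooling might model the rule, not a prescribed technical standard; the field names and rank strings are assumptions), the check for whether an incoming notice constitutes "actual knowledge" could look like this in Python:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative ranks drawn from the amendment's text; real tooling would need
# the full list of equivalent ranks notified by the Appropriate Government.
AUTHORISED_RANKS = {"Joint Secretary", "Director", "Deputy Inspector General of Police"}


@dataclass
class TakedownNotice:
    source: str                             # "court_order" or "government_intimation"
    issuing_officer_rank: Optional[str] = None
    legal_basis: Optional[str] = None       # statutory provision invoked
    unlawful_act: Optional[str] = None      # nature of the unlawful act
    target_url: Optional[str] = None        # specific URL / electronic identifier


def constitutes_actual_knowledge(notice: TakedownNotice) -> bool:
    """Return True only if the notice matches one of the two exhaustive modes
    of 'actual knowledge' under the amended Rule 3(1)(d) (illustrative model)."""
    if notice.source == "court_order":
        return True
    if notice.source == "government_intimation":
        # A reasoned intimation must come from a sufficiently senior officer and
        # must specify the legal basis, the unlawful act and the exact location.
        return (
            notice.issuing_officer_rank in AUTHORISED_RANKS
            and bool(notice.legal_basis)
            and bool(notice.unlawful_act)
            and bool(notice.target_url)
        )
    return False  # private complaints or informal requests do not trigger the clock
```

Only a validated trigger of this kind would start the thirty-six-hour removal window; private complaints would not.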

IV. Transparency, Proportionality, and Constitutional Fidelity

From a constitutional perspective, the 2025 amendment is best understood as a reaffirmation of Shreya Singhal rather than a departure from it. The amendment reflects what may be described as procedural proportionality rather than substantive expansion.

Senior-level authorisation ensures political and administrative accountability. Reasoned intimations grounded in identifiable statutory provisions introduce legality and precision. The monthly review mechanism embeds proportionality within executive decision-making itself, acting as a safeguard against bureaucratic inertia and mission creep.

Importantly, the amendment does not expand the substantive grounds of censorship. It merely disciplines the process through which existing legal prohibitions are enforced, strengthening both the legitimacy and durability of State action.

V. Practical Implications for Intermediaries and Users

For the State, the amendment bolsters enforcement credibility. By aligning takedown powers with constitutional safeguards, it insulates regulatory action from judicial invalidation and enhances public trust in digital governance.

For intermediaries, the amendment provides long-overdue clarity. Compliance obligations are now tethered to clearly identifiable triggers, reducing uncertainty and litigation risk. While the thirty-six-hour timeline remains demanding, intermediaries now know precisely when the clock begins to run.

For users, the amendment enhances procedural fairness. Content takedown decisions are embedded within a traceable administrative process, reducing the risk of arbitrary or excessive interference with lawful speech.

VI. Regulating Synthetically Generated Information

The rapid evolution of generative Artificial Intelligence (AI) has fundamentally transformed the digital information ecosystem. Technologies capable of producing highly realistic synthetic audio, visual, and textual content, often indistinguishable from authentic material, have expanded creative and commercial possibilities, while simultaneously intensifying risks of misinformation, impersonation, fraud, electoral manipulation, and erosion of public trust. It is against this backdrop that the Central Government proposed to notify the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, exercising powers under Section 87 of the Information Technology Act, 2000. The amendments represent a significant regulatory intervention aimed at addressing emerging AI-driven harms while preserving the foundational architecture of intermediary liability and safe harbour protection.

A defining feature of the 2025 Amendment Rules is the introduction of a statutory definition of “synthetically generated information.” By inserting clause (wa) in Rule 2(1), the Rules define such information as content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that reasonably appears to be authentic or true. The definition is deliberately broad and technology-neutral, ensuring regulatory durability amid rapidly evolving AI tools and techniques. Crucially, the focus is not on artificiality per se, but on the reasonable appearance of authenticity—thereby centring regulatory concern on deception, user harm, and misuse rather than benign or clearly fictional digital content.

To eliminate interpretational ambiguity, the Amendment Rules introduce sub-rule (1A) to Rule 2, clarifying that references to “information” in the context of unlawful acts under the IT Rules, 2021, including Rules 3 and 4, shall include synthetically generated information. This clarification is doctrinally significant. It ensures that AI-generated or manipulated content is not treated as a regulatory exception but is fully subsumed within the existing intermediary governance framework governing unlawful content, notice-and-takedown obligations, and enhanced due diligence requirements. By embedding synthetic content within the established statutory lexicon, the amendment avoids creating a parallel or fragmented regulatory regime.

At the level of intermediary protection, the 2025 amendments incorporate an important safeguard through a proviso to Rule 3(1)(b). This proviso clarifies that the removal or disabling of access to information, including synthetically generated information, undertaken in good faith, whether pursuant to user grievances or reasonable content moderation efforts, shall not be construed as a violation of the conditions for safe harbour under Section 79(2) of the IT Act. This provision reflects regulatory prudence, recognising that fear of losing statutory immunity can otherwise chill proactive content moderation. By explicitly protecting good-faith action, the Rules encourage responsible intermediary behaviour without diluting the safe harbour framework.

A notable innovation is the insertion of sub-rule (3) in Rule 3, which introduces targeted due diligence obligations for intermediaries that provide computer resources enabling the creation or modification of synthetically generated information. Such intermediaries are now required to ensure that every instance of synthetic content is clearly labelled or embedded with a permanent, unique metadata identifier. The Rules prescribe minimum visibility standards: in visual content, the label must cover at least ten percent of the display area, while in audio content, the disclosure must be audible during the initial ten percent of its duration. The prohibition on enabling the removal, suppression, or alteration of such identifiers reinforces the integrity and enforceability of the transparency mechanism. This approach reflects a regulatory preference for traceability and user awareness over outright prohibition.
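To make the prescribed thresholds concrete (a minimal sketch assuming a pixel-based display area and linear audio duration; the function names are illustrative, not drawn from the draft Rules), the ten-percent requirements reduce to simple arithmetic:

```python
def min_label_area(display_width_px: int, display_height_px: int) -> int:
    """Minimum label area in pixels under the draft rule's ten-percent
    visibility requirement for visual synthetic content (illustrative)."""
    return int(0.10 * display_width_px * display_height_px)


def audio_disclosure_window(total_duration_s: float) -> float:
    """Window (in seconds) within which the audible disclosure must occur,
    i.e. the initial ten percent of the audio's duration (illustrative)."""
    return 0.10 * total_duration_s


def label_is_compliant(label_area_px: int, width_px: int, height_px: int) -> bool:
    # The visible label must cover at least ten percent of the display area.
    return label_area_px >= min_label_area(width_px, height_px)


# Example: a 1920x1080 video would need a label of at least 207,360 pixels,
# and a 60-second audio clip would need its disclosure within the first 6 seconds.
```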

Enhanced obligations are imposed on Significant Social Media Intermediaries (SSMIs) through the insertion of Rule 4(1A). Under this provision, SSMIs must obtain a declaration from users regarding whether uploaded content is synthetically generated. Beyond reliance on self-declaration, intermediaries are also required to deploy reasonable and proportionate technical measures—including automated tools—to verify the accuracy of such disclosures, having regard to the nature, format, and source of the content. Where content is identified as synthetic, the intermediary must ensure prominent labelling prior to its publication or display. Importantly, the amendments introduce a compliance-linked accountability mechanism: an intermediary that knowingly permits, promotes, or fails to act upon non-compliant synthetic content is deemed to have failed to exercise due diligence, thereby risking loss of safe harbour protection.
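The workflow the draft Rule 4(1A) contemplates for an SSMI can be summarised, at a very high level, in the following sketch (the detector is a placeholder; the draft Rules do not prescribe any particular tool or API):

```python
from dataclasses import dataclass


@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool  # user's self-declaration at upload time


def classifier_flags_synthetic(upload: Upload) -> bool:
    """Placeholder for the 'reasonable and proportionate technical measures'
    (e.g. automated detection tools) the draft Rule 4(1A) contemplates."""
    return False  # assumption: plug an actual detector in here


def process_upload(upload: Upload) -> str:
    # Step 1: take the user's declaration; Step 2: verify it with automated
    # tools; Step 3: label prominently before publication if either signal
    # indicates synthetic content.
    is_synthetic = upload.user_declared_synthetic or classifier_flags_synthetic(upload)
    if is_synthetic:
        return f"publish {upload.content_id} with prominent 'synthetically generated' label"
    return f"publish {upload.content_id} without label"
```

The accountability mechanism described above sits on top of this pipeline: knowingly permitting non-compliant synthetic content through it risks loss of safe harbour.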

VII. Accompanying Explanatory Notes

The Explanatory Note[4] accompanying the proposed amendments provides critical insight into the Government’s regulatory rationale. Anchored in the objective of ensuring an “Open, Safe, Trusted and Accountable Internet,” the Note identifies the proliferation of highly realistic AI-generated content—particularly deepfakes—as a systemic threat capable of inflicting multidimensional harm. These harms include non-consensual intimate imagery, financial fraud, impersonation, large-scale misinformation, electoral interference, and a broader erosion of trust in digital ecosystems. Recognising that synthetic content increasingly blurs the line between truth and fabrication, the Note justifies the need for strengthened intermediary due diligence, especially for platforms with significant reach and influence.

The Explanatory Note clarifies that synthetically generated information squarely falls within the ambit of “information” used to commit unlawful acts under existing provisions, including Rules 3(1)(b) and 3(1)(d), thereby aligning AI-generated harms with established notice-and-takedown and lawful order-based mechanisms. At the same time, it signals a decisive policy shift toward anticipatory regulation. Unlike the reactive, order-driven obligations under Rule 3(1)(d), the proposed framework for synthetic content is proactive, continuous, and technology-dependent. By mandating labelling, metadata embedding, user declarations, and verification measures, the State seeks to embed transparency and accountability directly into platform governance structures.

Nevertheless, the Explanatory Note also reflects an attempt to balance enhanced accountability with intermediary protection. It expressly safeguards good-faith removal of harmful synthetic content under Section 79(2) of the IT Act, thereby acknowledging constitutional concerns surrounding over-censorship and chilling effects on free expression. This balance underscores the regulatory intent to recalibrate, rather than dismantle, intermediary liability in response to generative AI.

Collectively, the 2025 Amendment Rules represent a calibrated and constitutionally conscious response to the challenges posed by AI-generated and synthetic content. Rather than imposing blanket prohibitions or content-based censorship, the framework prioritises transparency, traceability, and informed user choice, while remaining anchored in the safe harbour principles of the IT Act. By integrating synthetic content regulation within the existing intermediary governance architecture, the amendments seek to preserve innovation and free expression while addressing demonstrable harms. As generative technologies continue to evolve, the 2025 framework provides a foundational legal architecture, i.e. one that signals a shift toward anticipatory governance, yet remains attentive to constitutional limits and the need for regulatory restraint.

VIII. Safe Harbour Under Strain

Nevertheless, the Note reveals a decisive policy shift toward anticipatory regulation, signalling the State’s intention to move beyond reactive enforcement and embed continuous transparency and verification obligations within platform governance structures, thereby recalibrating the contours of intermediary liability in response to the perceived risks posed by generative artificial intelligence.

Section 79 was designed to ensure intermediaries are not compelled to police content proactively. The draft synthetic content rules risk reintroducing constructive knowledge through the back door. By mandating verification tools, the law presumes detection capacity that does not yet reliably exist.

Deepfake detection technologies remain imperfect. The regulatory asymmetry is stark: intermediaries face little risk for over-removal but significant liability for under-detection. The rational response is over-censorship. This regulatory asymmetry, rather than malicious intent, threatens the continued viability of intermediary neutrality.

IX. Enduring Relevance of Shreya Singhal

The Supreme Court in Shreya Singhal was acutely conscious of chilling effects. The draft synthetic content rules risk recreating this environment through algorithmic enforcement. While a proviso protects intermediaries removing synthetic content, the real risk lies in loss of safe harbour for failure to detect, skewing incentives toward suppression of lawful speech.

The European Union’s AI Act, adopted in 2024, offers a useful contrast in regulatory design rather than substantive objectives. Article 50 imposes transparency obligations on deployers of AI systems, not intermediaries. The EU model preserves intermediary safe harbour, recognises technical limits, and adopts a risk-based approach with exemptions for artistic and satirical expression.

Atypically, the 2025 amendment to Rule 3(1)(d) demonstrates that India already possesses a constitutionally sound mechanism to address unlawful content, including harmful deepfakes. The central regulatory question is not whether to regulate AI-generated harm, but how. Targeted orders, criminal law, civil remedies, and public investment in AI forensics offer more precise responses than continuous platform monitoring.

X. Choosing the Future of India’s Digital Constitution

The 2025 amendment to Rule 3(1)(d) reflects measured, transparent, and accountable digital governance. By restoring procedural discipline to content takedown and aligning executive action with constitutional safeguards, it reaffirms the intermediary’s role as a neutral conduit rather than an adjudicator of legality. The amendment demonstrates that India already possesses a constitutionally sound mechanism to address unlawful online content, including harmful manifestations of AI-generated material, through targeted orders, clearly defined authority, and built-in proportionality review.

The parallel push toward proactive verification of synthetically generated content, however, threatens to unsettle this carefully restored balance. By imposing continuous, technology-dependent obligations on intermediaries, particularly Significant Social Media Intermediaries, the draft framework risks transforming platforms from facilitators of speech into instruments of anticipatory regulation. This shift carries significant implications for free expression, innovation, and intermediary neutrality, especially in light of the technical limitations of deepfake detection and the asymmetric liability incentives that favour over-removal.

India thus stands at a constitutional crossroads, i.e. between preserving intermediaries as neutral conduits of speech, subject to clearly triggered and reviewable takedown obligations, and recasting them as active monitors responsible for verifying authenticity at scale. The regulatory choices made in navigating AI-generated content will shape not merely platform governance, but the contours of India’s digital constitutional order. Whether the future lies in procedural restraint anchored in Shreya Singhal, or in expansive anticipatory regulation driven by technological anxiety, will determine how free speech, accountability, and innovation coexist in India’s democratic digital ecosystem.

Mr. M. G. Kodandaram, IRS.

References

[1] https://www.meity.gov.in/static/uploads/2025/10/708f6a344c74249c2e1bbb6890342f80.pdf

[2] https://indiankanoon.org/doc/110813550/ 

[3] https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf

[4] https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf


Non-EU Data Processors under the radar of GDPR Supervisory Authorities for fines

It appears that EU GDPR authorities are now engaged in a global data warfare, extending GDPR fines to non-EU data processors.

In a recent case, CNIL, the French authority, has imposed a fine of 1 million Euros on a SaaS provider.

Naavi has several times addressed the issue of such fines on Indian data processors and the need for the Indian Government to provide a protective shield. This has been ignored by MeitY all along. Perhaps this needs to be addressed once again.

In the instant case (see details here), on December 11, 2025, CNIL sanctioned Mobius Solutions Ltd, an Israeli company acting as a subcontractor, with a fine of 1 million Euros for a data leak.

The violation was “Failure to delete data at the end of the contractual relationship”.

MOBIUS SOLUTIONS LTD retained a copy of the data of more than 46 million DEEZER users after the end of their contractual relationship, despite its obligation to delete all such data at the end of the contract. The company was also found to have used client data to improve its own services. Further, the company had failed to maintain the required register of processing activities.

Unfortunately, the data leaked onto the dark web, causing CNIL to act.

In November 2022, CNIL had been notified about the data breach by the Controller. Data from 12.7 to 21.6 million EU users (including 9.8 million in France)—including names, ages, email addresses, and listening habits—had been posted on the dark web. The platform identified its former subcontractor, which had provided personalized advertising services, as the source of the breach. The CNIL conducted checks in 2023 and 2024, followed by an investigation in 2025, which uncovered multiple GDPR violations by the subcontractor.

In this context, it is important to note that for Indian data processors handling GDPR data, FDPPI has released DGPSI-GDPR as a compliance framework. Hopefully this will assist Indian companies in mitigating GDPR risks.

It may, however, be noted that the EU approach to GDPR compliance has been predatory and the cross-border transfer conditions are not legally reconcilable with local laws. Hence the risk can be mitigated but not fully eliminated. Even so, this would be better than ignoring compliance.

Also Refer: 

Fox Rothschild

 Global Policy Watch  

 


Cyber Law College/FDPPI upgrade the online courses

We are pleased to inform that the course content of Module I, Module G and the complete C.DPO.DA. courses conducted by Cyber Law College under the FDPPI certification scheme has been upgraded to the latest versions.

Accordingly, all registrations from 1st January 2026 will be eligible for the additional videos.

The updation process is currently in progress. Kindly send an e-mail to Naavi if necessary.

Cyber Law College is also introducing a separate training program for GDPR specialization which will include:

  1. GDPR: the law
  2. GDPR Member State laws
  3. GDPR case studies
  4. GDPR Digital Omnibus Proposal
  5. ISO 27701:2025 for GDPR

This program will be called “Master in GDPR Compliance” and should be useful for all DPOs who are currently working in the GDPR domain.

This new course will be launched in January 2026.

Naavi

Also Refer: CNIL Fines Non-EU Data Processor


Queries on DGPSI-AI explained

The DGPSI-AI is a framework conceived for use by deployers of AI who are “Data Fiduciaries” under DPDPA 2023.

An interesting set of observations has been received recently from a professional regarding the framework. We welcome the comments as an opportunity to improve the framework over a period of time. In the meantime, let us have an academic debate to understand the concerns expressed and respond.

The observer made the following four observations as concerns related to the DGPSI-AI framework.

1. AI’s definition is strange and too broad. Lots of ordinary software has adaptive behavior (rules engines, auto-tuning systems, recommender heuristics, control systems). If you stretch “modify its own behavior,” you’ll start classifying non-AI automation as AI. Plus, within AI spectrum, only ML models may have self learning capabilities. Linear and statistical and decision tree models do not.

2. “AI risk = data protection risk = fiduciary risk”. That is legally and conceptually incorrect. DPDP Act governs personal data processing, not AI behavior as such. Many AI risks cited (hallucination, deception, emergent behavior, hypnosis theory) are safety / reliability / ethics risks, not privacy risks.

3. “Unknown risk = significant risk” is a logical fallacy. Unknown ≠ high. Unknown risk can be negligible, bounded or mitigated through controls. Risk management is about estimating and bounding uncertainty.

4. Explainability treated as a legal obligation, not a contextual requirement. This is overstated. DPDP requires notice, not model explainability.

I would like to provide my personal response to these observations, as follows:

1. AI Definition

DGPSI has recommended the adoption of a definition of AI which reflects the ability of the software to change its execution behaviour automatically, based on the end results it produces, without human intervention in creating the modified version.

A “rules engine”, “auto-tuning system” or other such component of ordinary software is characterised by the existence of pre-written code for a given context and situation. If the decision rule fails, the software may either crash or fall back to a default behaviour. The outcome is therefore not driven by self-learning on the part of the software; it is pre-programmed by a human being. Such software may have a higher degree of automation than most software but need not be considered AI in the strict sense.

Therefore, if there is any AI model where the output is pre-determined, it can be excluded from the definition of AI by a DGPSI-AI auditor with suitable documentation.

Where the model self-corrects and, over a period of time, transforms itself, like a metamorphosis, into a new state without human intervention, the risk is that further outputs may start exhibiting more and more hallucinations or unpredictable outcomes. The output data, which may become input data for further use, may get so poisoned that the difference between reality and artificial creation vanishes. Hence such behaviour is classified as AI.

In actual practice, we tend to use the term “AI” loosely to refer to any software with a higher degree of autonomy. Such software can be excluded from this definition. The model implementation specification MIS-AI-1 in the framework states as follows:

“The deployer of an AI software in the capacity of a Data Fiduciary shall document a Risk Assessment of the Software obtaining a confirmation from the vendor that the software can be classified as ‘AI’ based on whether the software leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as DPIA for the AI process”

This implementation specification, which requires documentation for the purpose of compliance, may perhaps address the concern expressed.
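To illustrate the boundary the framework draws between pre-programmed automation and self-modifying behaviour (a minimal sketch, not part of DGPSI-AI itself; the fraud-flagging scenario is purely hypothetical), compare the two components below: the first always follows a human-written rule with a fixed default, while the second adjusts its own decision threshold from observed outcomes without human intervention:

```python
class RulesEngine:
    """Pre-programmed behaviour: every outcome is determined by human-written rules."""

    def decide(self, transaction_amount: float) -> str:
        if transaction_amount > 10_000:
            return "flag"
        return "allow"  # default behaviour fixed in advance by a human


class SelfAdjustingModel:
    """Adaptive behaviour: the decision threshold changes with observed outcomes,
    so future outputs are not fully predetermined by the original code."""

    def __init__(self, threshold: float = 10_000.0):
        self.threshold = threshold

    def decide(self, transaction_amount: float) -> str:
        return "flag" if transaction_amount > self.threshold else "allow"

    def learn(self, transaction_amount: float, was_fraud: bool) -> None:
        # Nudge the threshold toward observed outcomes; over many updates the
        # behaviour drifts away from what the developer originally coded.
        if was_fraud and transaction_amount < self.threshold:
            self.threshold = 0.9 * self.threshold + 0.1 * transaction_amount
        elif not was_fraud and transaction_amount > self.threshold:
            self.threshold = 0.9 * self.threshold + 0.1 * transaction_amount
```

It is the second kind of behaviour, where outputs are not fully predetermined by explicit code, that MIS-AI-1 asks the deployer to document and treat as AI.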

2. AI Risk and Privacy Risk

The framework DGPSI-AI is presented in the specific context of the responsibility of a “Data Fiduciary” processing “Personal Data”.

Since non-compliance with DPDPA can lead to a financial risk of Rs 250 crore or more, it is considered prudent for the data fiduciary to treat AI behavioural risks as risks that can lead to non-compliance.

In the context of our usage, hallucination, rogue behaviour, etc., which are termed “safety” or “ethics” related issues in AI, can amount to “unauthorised processing of personal data” and hence become risks that may result in hefty fines. One cannot justify to the Data Protection Board that the error happened because AI was being used and must therefore be excused.

Hence AI risks become privacy risks, or DPDPA non-compliance risks.

3. Unknown Risk:

The behaviour of AI is, by design, meant to be creative and is therefore unpredictable. Not all risks associated with the algorithm are known even to the developer himself. They therefore have to be classified as “Unknown Risks” by the deployer.

We accept that an unknown risk can be negligible, but we come to know of that only after the risk becomes known. A fiduciary cannot assume that the risk, when determined, will be negligible. If he has to determine whether he is a “Significant Data Fiduciary” or not, he should be able to justify that the risk is negligible ab initio. This is provided for in the framework by MIS-AI-3, which suggests:

“Where the data fiduciary in its prudent evaluation considers that the sensitivity of the “Unknown Risk” in the given process is not likely to cause significant harm to the data principals, it shall create a “AI-Deviation Justification Document” and opt not to implement the “Significant Data Fiduciary” obligations solely as a reason of using AI in the process. “

This provides a possibility of “absorbing” the “Unknown Risk” irrespective of its significance, including dispensing with the need to classify the deployer as a “Significant Data Fiduciary”.

Hence there is an in-built flexibility that addresses the concern.
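A rough sketch of the decision logic MIS-AI-3 describes might look as follows (the class, field and return values are illustrative, not prescribed by the framework):

```python
from dataclasses import dataclass


@dataclass
class AIProcessAssessment:
    process_name: str
    likely_to_cause_significant_harm: bool  # outcome of the fiduciary's prudent evaluation


def apply_mis_ai_3(assessment: AIProcessAssessment) -> dict:
    """Illustrative decision logic: either absorb the unknown risk with a
    justification document, or take on Significant Data Fiduciary obligations."""
    if not assessment.likely_to_cause_significant_harm:
        return {
            "classification": "Not a Significant Data Fiduciary solely due to AI use",
            "required_document": "AI-Deviation Justification Document",
        }
    return {
        "classification": "Significant Data Fiduciary obligations apply",
        "required_document": "Significant Data Fiduciary compliance documentation",
    }
```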

4. Explainability

The term “Explainability” may be used by the AI industry in a particular manner. DGPSI-AI also extends the term to the legal obligation of a data fiduciary to give a clear, transparent privacy notice.

A “Notice” from a “Fiduciary” is required to be clear, understandable and transparent to the data principal, and hence there is a duty on the Data Fiduciary to understand the AI algorithm himself.

It may not be necessary to share the Explainability document of the AI developer with the data principal in the privacy notice. But the Data Fiduciary should have a reasonable assurance that the algorithm does not cause any harm to the data principal and its decisions are reasonably understood by the Data Fiduciary.

Towards this objective, MIS-AI-6 states:

“The deployer shall collect an authenticated “Explainability” document from the developer as part of the licensing contract indicating the manner in which the AI functions in the processing of personal data and the likely harm it may cause to the data principals.”

I suppose this reasonably answers the concerns expressed. Further debate is welcome.

Naavi


Vinod Sreedharan puts a creative touch to DGPSI-AI

Mr Vinod Sreedharan is an AI expert with a creative bent of mind. He has applied his creative thoughts to give a visual imagery touch to “Taming of DPDPA and AI with DGPSI-AI”.

The complete document in PDF format is available here.
