Queries on DGPSI-AI explained

The DGPSI-AI is a framework conceived for use by deployers of AI who are “Data Fiduciaries” under DPDPA 2023.

An interesting set of observations has been received recently from a professional regarding the framework. We welcome the comments as an opportunity to improve the framework over time. In the meantime, let us have an academic debate to understand the concerns expressed and respond to them.

The observer made the following four observations as concerns about the DGPSI-AI framework.

1. AI’s definition is strange and too broad. Lots of ordinary software has adaptive behaviour (rules engines, auto-tuning systems, recommender heuristics, control systems). If you stretch “modify its own behavior,” you’ll start classifying non-AI automation as AI. Plus, within the AI spectrum, only ML models may have self-learning capabilities; linear, statistical and decision-tree models do not.

2. “AI risk = data protection risk = fiduciary risk”. That is legally and conceptually incorrect. DPDP Act governs personal data processing, not AI behavior as such. Many AI risks cited (hallucination, deception, emergent behavior, hypnosis theory) are safety / reliability / ethics risks, not privacy risks.

3. “Unknown risk = significant risk” is a logical fallacy. Unknown ≠ high. Unknown risk can be negligible, bounded or mitigated through controls. Risk management is about estimating and bounding uncertainty.

4. Explainability is treated as a legal obligation, not a contextual requirement. This is overstated. The DPDP Act requires notice, not model explainability.

I would like to provide my personal response to these observations, as follows:

1. AI Definition

DGPSI has recommended adopting a definition of AI that reflects an ability for automated change of the execution code, based on the end results of the software, without the intervention of a human to create a modified version.

A “Rules Engine”, an “auto-tuning system” or other such components of ordinary software are characterised by the existence of explicit code for a given context and situation. If a decision rule fails, the software either crashes or falls back to a default behaviour. The outcome is therefore not driven by any self-learning of the software; it is pre-programmed by a human being. Such software may have a higher degree of automation than most software, but it need not be considered AI in the strict sense.

Therefore, if there is any model whose output is pre-determined, a DGPSI-AI auditor can exclude it from the definition of AI with suitable documentation.

Where the model self-corrects and, over a period of time, transforms itself into a new state, like a metamorphosis, without human intervention, the risk is that further outputs may start exhibiting more and more hallucinations or unpredictable outcomes. Output data that becomes input data for further use may get so poisoned that the difference between reality and artificial creation vanishes. Hence such behaviour is classified as AI.
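To illustrate the distinction in a minimal way, the hypothetical Python sketch below contrasts a pre-programmed rules engine, where every outcome is fixed by explicit human-written code, with a model that adjusts its own decision parameter from feedback and therefore produces outputs not fully predetermined by its original code. The class names and numbers are assumptions made purely for illustration and are not part of the framework.

```python
# A minimal, hypothetical sketch of the distinction drawn above.
# The class names and numbers are illustrative, not part of DGPSI-AI.

class DiscountRulesEngine:
    """Pre-programmed automation: every outcome is fixed by explicit code."""

    def decide(self, purchase_total: float) -> float:
        if purchase_total >= 1000:
            return 0.10   # rule written by a human
        return 0.0        # default behaviour, also written by a human


class AdaptiveDiscountModel:
    """Self-learning behaviour: the decision parameter changes with feedback,
    so future outputs are not fully predetermined by the original code."""

    def __init__(self, rate: float = 0.05, learning_rate: float = 0.1):
        self.rate = rate
        self.learning_rate = learning_rate

    def decide(self, purchase_total: float) -> float:
        return self.rate

    def learn(self, customer_converted: bool) -> None:
        # The model modifies its own behaviour from the end results it
        # observes, without a human releasing a modified version.
        target = 0.0 if customer_converted else 0.15
        self.rate += self.learning_rate * (target - self.rate)
```

On the reading described above, only the second kind of behaviour would fall within the DGPSI-AI definition of AI.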

In actual practice, we tend to use the term “AI” loosely to refer to any software with a higher degree of autonomy. Such software can be excluded from this definition. The model implementation specification MIS-AI-1 in the framework states as follows:

“The deployer of an AI software in the capacity of a Data Fiduciary shall document a Risk Assessment of the Software obtaining a confirmation from the vendor that the software can be classified as ‘AI’ based on whether the software leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as DPIA for the AI process”

This implementation specification, which requires documentation for the purpose of compliance, may perhaps address the concern expressed.
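Purely as an illustration of how such documentation might be organised, the sketch below records the MIS-AI-1 assessment as a simple structured record. The field names and the classification helper are assumptions made for this sketch and are not prescribed by the framework.

```python
# Hypothetical sketch of a record a deployer might keep for the MIS-AI-1
# assessment; the field names are assumptions, not framework text.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIClassificationRecord:
    software_name: str
    vendor: str
    vendor_confirmation_received: bool   # written confirmation from the vendor
    uses_autonomous_learning: bool       # adapts behaviour from end results
    uses_probabilistic_models: bool      # outputs not fully predetermined by code
    assessed_on: date = field(default_factory=date.today)
    notes: str = ""

    @property
    def classified_as_ai(self) -> bool:
        # The MIS-AI-1 criterion: autonomous learning algorithms or
        # probabilistic models that adapt behaviour and generate outputs
        # not fully predetermined by explicit code.
        return self.uses_autonomous_learning or self.uses_probabilistic_models


record = AIClassificationRecord(
    software_name="Customer support chatbot",
    vendor="Example AI Vendor Pvt Ltd",
    vendor_confirmation_received=True,
    uses_autonomous_learning=True,
    uses_probabilistic_models=True,
    notes="Treated as the DPIA for this AI process under MIS-AI-1.",
)
print(record.classified_as_ai)   # True -> treat the software as AI
```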

2. AI Risk and Privacy Risk

The DGPSI-AI framework is presented in the specific context of the responsibility of a “Data Fiduciary” processing “Personal Data”.

Since non-compliance with the DPDPA carries a financial risk of Rs 250 crore or more, it is considered prudent for the data fiduciary to treat AI behavioural risks as risks that can lead to non-compliance.

In the context of our usage, hallucination, rogue behaviour and other issues that are termed “safety” or “ethics” related in AI are applied in recognising “unauthorised processing of personal data”, and hence become risks that may result in hefty fines. A data fiduciary cannot justify to the Data Protection Board that the error happened because it was using AI and should therefore be excused.

Hence AI risks become Privacy Risks or DPDPA Non-Compliance Risks.

3. Unknown Risk

The behaviour of AI is by design meant to be creative and is therefore unpredictable. All the risks associated with the algorithm are not known even to the developer himself. They definitely need to be classified as “Unknown Risks” by the deployer.

We accept that an Unknown Risk can be negligible. But we come to know of this only after the risk becomes known. A fiduciary cannot assume that the risk, when determined, will be negligible. If he has to determine whether he is a “Significant Data Fiduciary” or not, he should be able to justify ab initio that the risk is negligible. This is provided for in the framework by MIS-AI-3, which suggests:

“Where the data fiduciary in its prudent evaluation considers that the sensitivity of the “Unknown Risk” in the given process is not likely to cause significant harm to the data principals, it shall create a “AI-Deviation Justification Document” and opt not to implement the “Significant Data Fiduciary” obligations solely as a reason of using AI in the process. “

This provides a possibility of “absorbing” the “Unknown Risk” irrespective of its significance, including setting aside the need to classify the deployer as a “Significant Data Fiduciary”.
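As a sketch only, and not as part of the framework text, the hypothetical function below captures this decision path: depending on the deployer’s prudent evaluation of the likely harm, it either points to the Significant Data Fiduciary obligations or produces an “AI-Deviation Justification Document” record. The names and fields are assumptions made for illustration.

```python
# Hypothetical sketch of the MIS-AI-3 decision path; names and fields
# are assumptions made for illustration only.
from dataclasses import dataclass


@dataclass
class AIDeviationJustification:
    process_name: str
    evaluation_summary: str
    significant_harm_likely: bool = False


def unknown_risk_decision(process_name: str,
                          significant_harm_likely: bool,
                          evaluation_summary: str):
    """Either follow the Significant Data Fiduciary obligations or record
    an AI-Deviation Justification Document, per the prudent evaluation."""
    if significant_harm_likely:
        return "Implement Significant Data Fiduciary obligations"
    return AIDeviationJustification(process_name, evaluation_summary)
```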

Hence there is an in-built flexibility that addresses the concern.

4. Explainability

The term “Explainability” may be used by the AI industry in a particular manner. DGPSI-AI also applies the term to the legal obligation of a data fiduciary to give a clear, transparent privacy notice.

A “Notice” from a “Fiduciary” needs to be clear, understandable and transparent to the data principal, and hence there is a duty on the Data Fiduciary to understand the AI algorithm himself.

It may not be necessary to share the AI developer’s Explainability document with the data principal in the privacy notice. But the Data Fiduciary should have a reasonable assurance that the algorithm does not cause any harm to the data principal and that its decisions are reasonably understood by the Data Fiduciary.

Towards this objective, MIS-AI-6 states:

“The deployer shall collect an authenticated “Explainability” document from the developer as part of the licensing contract indicating the manner in which the AI functions in the processing of personal data and the likely harm it may cause to the data principals.”

I suppose this reasonably answers the concerns expressed. Further debate is welcome.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India and is presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He has now been focusing on projects such as Secure Digital India and Cyber Insurance.