We refer to the DGPSI-AI framework, which is an extension of the DGPSI framework for DPDPA compliance. DGPSI-AI covers the requirements of an AI deployer who is a Data Fiduciary with obligations under the DPDPA.
This framework is a pioneering effort in India, showing how voluntary AI regulation can be brought in through a framework even though we do not yet have a full-fledged law on AI. In the EU, the EU AI Act is a comprehensive legislation regulating AI usage, and it has naturally impacted the data protection scenario as well.
One of the requirements of the EU AI Act (Article 27) is that before deploying high-risk AI systems, deployers shall conduct an "Impact Assessment". This "Impact Assessment" is directed at the fundamental rights that the use of such systems may affect, and is called the FRIA, or Fundamental Rights Impact Assessment, to be conducted on the first use of the system. This framework may also affect the implementation of DORA (the EU Digital Operational Resilience Act).
Now the European Center for Not-for-Profit Law (ECNL) and the Danish Institute for Human Rights (DIHR) have released a template for the FRIA, which is an interesting document to study. It may be compared with DGPSI-AI, which has six principles and nine implementation specifications for AI deployers.
A copy of the report is available here.
In DGPSI-AI, one of the principles is:
“The responsibility of the AI deployer as a “Fiduciary” shall ensure all measures to safeguard the society from any adverse effect arising out of the use of the AI.”
Further, the implementation specifications include:

- The deployer of an AI software in the capacity of a Data Fiduciary shall document a Risk Assessment of the software, obtaining a confirmation from the vendor that the software can be classified as "AI" based on whether it leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as the DPIA for the AI process.
- The DPIA shall be augmented with a periodical external Data Auditor's evaluation at least once a year.
The FRIA, suggested by the ECNL, is considered similar to a DPIA and highlights the requirement to document the impact on fundamental rights. It could serve as a guideline for conducting DPIAs under the GDPR.
DGPSI-AI focuses more on compliance with the DPDPA 2023, while the FRIA focuses directly on "Fundamental Rights".
The template released in this context lists 51 fundamental rights over which the assessment is to be conducted.
The following questions are suggested to be explored across this canvas:
1. How can the deployment of the AI system negatively affect people?
2. What drives these negative outcomes?
3. Which people/groups are negatively affected in this scenario?
4. What is the fundamental right negatively impacted?
5. What is the extent of the interference with the respective fundamental right?
6. How would you assess the likelihood of this interference? (Explain your answer)
7. What is the geographic scope of the negative impact?
8. Within the group identified, how many people will be negatively impacted?
9. Amongst this group, are there people in situations of vulnerability such as children, the elderly, people with disabilities, low-income households, or racial/ethnic minorities? (Explain your answer)
10. What is the gravity of the harm that affected people might experience?
11. What is the irreversibility of the harm?
12. What is the level of prioritisation for this impact?
13. List the actions already in place that can prevent and mitigate the negative impact.
14. List all the additional actions that should be taken to prevent and mitigate the negative impacts.
The template suggests allocating scores for each fundamental right and aggregating them.
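To make the scoring idea concrete, here is a minimal sketch of how such an assessment might be tabulated. Note that the template does not prescribe a specific formula; the 1-to-5 scales, the per-right product of gravity, likelihood and irreversibility, and the summation across rights are all assumptions made purely for illustration.

```python
# Illustrative only: the ECNL/DIHR template does not publish a fixed formula.
# The scales and the aggregation rule below are assumed for this sketch.
from dataclasses import dataclass


@dataclass
class RightImpact:
    right: str            # name of the fundamental right assessed
    gravity: int          # assumed scale: 1 (minor) to 5 (severe)
    likelihood: int       # assumed scale: 1 (rare) to 5 (almost certain)
    irreversibility: int  # assumed scale: 1 (reversible) to 5 (irreversible)

    def score(self) -> int:
        # Assumed per-right aggregation: product of the three dimensions
        return self.gravity * self.likelihood * self.irreversibility


def aggregate(impacts: list[RightImpact]) -> int:
    # Assumed cross-right aggregation: sum of per-right scores
    return sum(i.score() for i in impacts)


impacts = [
    RightImpact("Right to privacy", gravity=4, likelihood=3, irreversibility=2),
    RightImpact("Non-discrimination", gravity=5, likelihood=2, irreversibility=4),
]
print(aggregate(impacts))  # (4*3*2) + (5*2*4) = 64
```

Even this toy version shows why a 51-right canvas multiplies the workload: each right needs its own gravity, likelihood and irreversibility judgment before any aggregation is possible.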
While we appreciate the comprehensive nature of the guideline, there is one fundamental principle that regulators and advisors should follow: any compliance requirement has to be designed to be as simple as possible. Making it complicated only adds disproportionate cost or invites outright rejection.
DGPSI-AI has tried to avoid this trap by leaving it to the auditor to decide which risks are to be factored in. Again, by aligning the DGPSI framework to compliance with the DPDPA, it is ensured that business entities need not be experts in the constitutional interpretation of fundamental rights and can instead focus on the law passed by Parliament.
Assessing the impact of an AI system on a canvas of 51 fundamental rights is a humongous task and perhaps not practical. India has only six fundamental rights, out of which the "Right to Privacy", carved out of the "Right to Life and Liberty", is the one relevant for DPDPA compliance and is included in DGPSI.
In the DGPSI-GDPR framework, the following four implementation specifications address the issue of AI deployment:
- The organization shall establish an appropriate policy to identify the AI risks in its own processing and in processing with data processors, and obtain appropriate assurances.
- The organization shall establish an appropriate policy to ensure that there is an accountable human handler in the organization, in the AI vendor organization, and in the AI-deploying data processor organization.
- The organization shall adopt AI deployment based on a specific documented requirement.
- All AI usage shall be supported by appropriate guardrails to mitigate the risks.
While we accept that the four AI-related specifications of DGPSI-GDPR may not be as comprehensive as the ECNL framework, we hope they are more practical.
I would be happy to receive the views and comments of the experts.
Naavi