DGPSI-AI is an extension of DGPSI, the one and only framework for DPDPA compliance. This extension addresses the deployment of AI by a Data Fiduciary and the preservation of DPDPA compliance in such a scenario.
The 9 implementation specifications are listed here and will be expanded upon through videos from Naavi Academy.
Kindly note that these specifications are a first version and may be fine-tuned as we go through IDPS 2025 and gather the views of other professionals.
| MIS-AI No | Specification | Associated Principle |
|---|---|---|
| 1 | The deployer of AI software, in its capacity as a Data Fiduciary, shall document a risk assessment of the software covering the following aspects, and shall also obtain a confirmation from the vendor that the software can be classified as AI, based on whether it leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This assessment shall be treated as the DPIA for the AI process. | Unknown Risk |
| 2 | The DPIA shall be augmented with a periodical external Data Auditor's evaluation at least once a year. | Unknown Risk |
| 3 | Where the data fiduciary, in its prudent evaluation, considers that the sensitivity of the "Unknown Risk" in the given process is not likely to cause significant harm to the data principals, it shall create an "AI-Deviation Justification Document" and may opt not to implement the "Significant Data Fiduciary" obligations solely by reason of using AI in the process. | Unknown Risk |
| 4 | Designate a specific human handler on the part of the Deployer-Data Fiduciary to be accountable for the consequences of the use of AI in personal data processing. By default, the DPO/Compliance Officer will be accountable. However, the "Process Owner" envisaged under the DGPSI framework and process-based compliance could be an alternate designate. | Accountability |
| 5 | Document the human handler for the AI on behalf of the licensor through the licensing contract; if the developer has hardcoded the accountable person for the AI in the code, the same may be recorded in the licensing contract. | Accountability |
| 6 | The deployer shall collect an authenticated "Explainability" document from the developer as part of the licensing contract, indicating the manner in which the AI functions in the processing of personal data and the likely harm it may cause to the data principals. | Explainability |
| 7 | The deployer shall develop an "AI Justification Document" before adopting an AI-led process for processing personal data under the jurisdiction of the DPDPA, justifying the use of AI and the exposure of data principals to the unknown risks, from technical and economic perspectives. | Responsibility |
| 8 | Document an assurance from the licensor that: (1) the AI software is adequately tested at their end for vulnerabilities, preferably by a third-party auditor, the document stating that "when deployed for data processing, the AI software is reasonably secured against vulnerabilities that may adversely affect the confidentiality, integrity and availability of data, and the privacy principles where the data processed is personally identifiable data"; (2) sufficient guard rails exist to protect the data principals whose data may be processed by the deployer; (3) the AI has been tested and is free from any malware that may affect other systems or data owners. | Security |
| 9 | The deployer of an AI shall take all such measures as are essential to ensure that the AI does not harm society at large. In particular, documentation of the following assurances from the licensor is recommended: (1) the AI comes with a tamper-proof kill switch; (2) in the case of humanoid robots and industrial robots, the kill switch shall be controlled separately from the intelligence imparted to the device, so that the device intelligence cannot take over the operation of the kill switch; (3) where the device attempts to access the kill switch without human intervention, a self-destruct instruction shall be built in; (4) cyborgs and sentient algorithms are a risk to society and shall be classified as critical risks and regulated more strictly than other AI, through an express approval at the highest management level in the data fiduciary; (5) data used for learning and for modifying future decisions of the AI shall be given a time-sensitive weightage, with a "Fading Memory" parameter assigned to the age of the observation. | Ethics |
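Specification 9(5) leaves the exact form of the "Fading Memory" parameter open. A minimal sketch of one possible interpretation, assuming an exponential half-life decay over the age of an observation (the function name, the half-life value, and the choice of decay curve are illustrative assumptions, not part of DGPSI-AI):

```python
import math


def fading_weight(age_days: float, half_life_days: float = 180.0) -> float:
    """Time-sensitive weightage for a training observation of a given age.

    Older observations contribute less to the AI's future decisions.
    half_life_days is a hypothetical tuning parameter: an observation
    that old carries half the weight of a fresh one.
    """
    return 0.5 ** (age_days / half_life_days)


# A fresh observation has full weight; weight halves every half-life.
print(fading_weight(0))    # 1.0
print(fading_weight(180))  # 0.5
print(fading_weight(360))  # 0.25
```

Under such a scheme, the data fiduciary would document the chosen half-life (or other decay schedule) in the licensing contract as the concrete value of the "Fading Memory" parameter.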
Kindly await videos explaining each of the implementation specifications.
The six principles which support these implementation specifications are as follows:
1. Unknown Risk
2. Accountability
3. Explainability
4. Responsibility
5. Security
6. Ethics
Naavi