DGPSI-AI is a comprehensive framework for AI governance and DPDPA compliance developed by Naavi.
It is a forward-looking initiative that bridges the gap between artificial intelligence and data protection through a series of principles. The framework is an extension of the existing Data Governance and Protection System of India (DGPSI) and is specifically designed to guide organizations in ensuring that their use of AI complies with India’s Digital Personal Data Protection Act (DPDPA).
The DGPSI-AI framework is built upon six core principles: “Unknown Risk is Significant Risk,” Accountability, Explainability, Responsibility, Security, and Ethics. Together, these principles aim to provide a robust structure for the ethical and lawful deployment of AI systems.
Principle 1: “Unknown Risk is Significant Risk”
The foundational principle of the DGPSI-AI framework posits that any process involving AI (defined as autonomous software capable of modifying its behavior without human intervention) inherently carries an “unknown risk.” This is because AI, particularly self-correcting software, can evolve in unpredictable ways, potentially leading to unforeseen and catastrophic outcomes. Unlike traditional software, where risks are generally identifiable and manageable through testing, AI’s ability to autonomously alter its own code introduces a level of uncertainty that cannot be fully assessed in advance.
This principle suggests that any organization deploying AI should be automatically classified as a “Significant Data Fiduciary” under the DPDPA. This classification mandates more stringent compliance requirements, including the necessity of conducting Data Protection Impact Assessments (DPIAs), appointing a Data Protection Officer (DPO), and undergoing data audits. Downgrading this risk classification would require substantial documentation and explicit assurances from the AI developer.
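This default-to-significant posture can be expressed as a simple decision rule. The sketch below is purely illustrative: the class, field names, and obligation list are assumptions for the example, not part of the DGPSI-AI specification.

```python
from dataclasses import dataclass

# Hypothetical sketch of Principle 1: deploying autonomous AI carries
# "unknown risk" and so triggers Significant Data Fiduciary obligations
# by default; a downgrade needs documented developer assurances.

@dataclass
class Deployment:
    uses_ai: bool                       # is autonomous software in the pipeline?
    developer_assurance_doc: bool       # explicit assurances from the AI developer
    risk_downgrade_justification: bool  # substantial documentation on file

def classify(d: Deployment) -> str:
    if not d.uses_ai:
        return "standard-fiduciary"
    # Default posture: unknown risk is significant risk.
    if d.developer_assurance_doc and d.risk_downgrade_justification:
        return "standard-fiduciary"  # downgrade only with documented assurances
    return "significant-data-fiduciary"

# Obligations that follow from the significant classification (per the text above).
SDF_OBLIGATIONS = ["DPIA", "appoint DPO", "periodic data audit"]
```

The point of the rule is its asymmetry: AI use flips the default to "significant," and only documented evidence can flip it back.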
Principle 2: Accountability
The principle of Accountability is central to AI governance. Within the DGPSI-AI framework, it establishes that autonomous AI systems must be accountable to the Data Fiduciary. Since an AI algorithm cannot be held legally responsible as a juridical entity, the accountability rests with the human element behind it. This could be an individual or a corporate entity, aligning with Section 11 of the Information Technology Act, 2000, which holds the person causing an automated system to act responsible for its actions.
Implementation of this principle involves two key actions. First, the developer’s digital signature should be embedded in the AI’s code, creating a “chain of AI ownership.” Second, a designated human “Handler” or “AI Owner” must be disclosed for every AI system. This ensures that externally there is a clearly identified responsible party (the DPO or a compliance officer), while internally a specific process owner is assigned accountability.
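The “chain of AI ownership” idea can be sketched as a linked series of signed records over the artifact. This is a minimal, assumed illustration, not a prescribed mechanism: HMAC with per-party keys stands in for real asymmetric digital signatures (such as Ed25519) only to keep the sketch dependency-free; the chaining structure, not the primitive, is the point.

```python
import hashlib
import hmac
import json

# Each party in the supply chain (developer, then deployer) signs the
# artifact hash together with the previous party's signature, so ownership
# can be traced back link by link. All names here are illustrative.

def sign_link(key: bytes, artifact_hash: str, prev_sig: str, signer: str) -> dict:
    payload = json.dumps(
        {"artifact": artifact_hash, "prev": prev_sig, "signer": signer},
        sort_keys=True,
    )
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"signer": signer, "payload": payload, "sig": sig}

def verify_link(key: bytes, link: dict) -> bool:
    expected = hmac.new(key, link["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, link["sig"])

# Example chain: developer signs first, deployer countersigns over it.
model_hash = hashlib.sha256(b"model-weights-v1").hexdigest()
dev_link = sign_link(b"developer-key", model_hash, prev_sig="", signer="developer")
deployer_link = sign_link(b"deployer-key", model_hash, dev_link["sig"], signer="deployer")
```

Because each link embeds the previous signature, tampering with any earlier link invalidates every later one, which is what makes the chain useful for establishing accountability.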
Principle 3: Explainability
The third principle, Explainability, addresses the “black box” nature of many AI systems. It requires that organizations can provide clear and accessible reasons for the outputs generated by their AI. This is crucial for building trust and is a key component of transparency, a fundamental tenet of data protection law. The ability to explain an AI’s decision-making process is vital for the Data Fiduciary to fulfill its obligations to data principals.
Explainability is not only about transparency but also about risk management. If a Data Fiduciary cannot explain how an AI functions, the full accountability for its consequences may shift to the developer or licensor, who would then be considered a Joint Data Fiduciary. Real-world applications of explainability are seen in financial services for loan decisions, in healthcare for diagnoses, and in human resources for recruitment, ensuring that decisions are fair, unbiased, and justifiable.
Principle 4: Responsibility
The principle of “Responsible AI Usage” emphasizes that the deployment of AI should primarily benefit the data principals and not solely serve the profit motives of the Data Fiduciary. This aligns with international principles such as the OECD’s “Inclusive Growth” and UNESCO’s principles of “necessity and proportionality.” The use of AI should be justified by the value it adds over non-AI processes, and this justification must be documented.
Organizations are expected to create an “AI use justification document” that outlines the purpose of the AI, a cost-benefit analysis comparing it to traditional methods, and evidence that the value proposition could not be achieved otherwise. This ensures that AI is adopted not merely to follow a trend but to serve genuine business and societal needs, with the welfare of the data principal at the forefront.
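The justification document described above could be held as a structured record so that the "value over the non-AI baseline" test is explicit and auditable. The schema below is a hypothetical sketch; DGPSI-AI does not prescribe field names or a pass/fail rule.

```python
from dataclasses import dataclass, field

# Illustrative structure for an "AI use justification document".
@dataclass
class AIUseJustification:
    purpose: str
    ai_benefit: float       # estimated value of the AI-based process
    non_ai_benefit: float   # estimated value of the best non-AI alternative
    evidence: list = field(default_factory=list)  # why non-AI cannot match

    def is_justified(self) -> bool:
        # AI must add value over the non-AI baseline, with evidence on file.
        return self.ai_benefit > self.non_ai_benefit and bool(self.evidence)
```

A record like this makes the cost-benefit comparison a documented artifact rather than an informal claim.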
Principle 5: Security
Security within the DGPSI-AI framework extends beyond typical cybersecurity to encompass the prevention of harm caused by the AI algorithm itself. The principle recognizes three main areas of risk to the data principal: potential physical harm, mental manipulation through “dark patterns,” and deeper neurological manipulation.
Given the “unknown” nature of AI risks, the Data Fiduciary must assume legal liability for any consequences. This necessitates obtaining assurances from the developer regarding rigorous testing and augmenting this with a “Liability” admission clause, supported by adequate insurance. The framework mandates that every AI algorithm should be insured against causing physical, mental, or neurological damage to users.
Principle 6: Ethics
The final principle of Ethics urges organizations to operate beyond the strict confines of written law and consider the broader societal good. This is particularly relevant in the current landscape where specific AI legislation is still developing. The DPDPA’s definition of a “Fiduciary” already implies an ethical duty to protect the interests of the data principal, and this principle extends that duty to AI processes.
Ethical considerations are to be identified through a thorough risk assessment process. The framework suggests that “Post Market Monitoring,” similar to that under the EU AI Act, can be an ethical practice in which the impact of AI on data principals is monitored even after the initial processing is complete. Another ethical consideration is the concept of “Data Fading”: the AI could, for instance, ask for consent at the end of each session to retain the learnings from that interaction, treating immediate processing and future reuse as distinct purposes requiring separate consent.
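The “Data Fading” flow can be sketched as a session whose learnings evaporate unless a separate retention consent is given at session end. This is a minimal illustration under assumed names; the framework describes the concept, not an API.

```python
# Immediate processing and future reuse are treated as distinct purposes:
# in-session learnings are usable during the session, but retention beyond
# it requires its own consent, collected when the session closes.

class Session:
    def __init__(self, user: str):
        self.user = user
        self.learnings: list[str] = []

    def record(self, learning: str) -> None:
        self.learnings.append(learning)  # used only within this session

    def close(self, retain_consented: bool) -> list[str]:
        # At session end, learnings "fade" unless the data principal
        # separately consents to their retention for future reuse.
        retained = self.learnings if retain_consented else []
        self.learnings = []
        return retained
```

The design choice worth noting is that fading is the default: absent an explicit "yes" at close, nothing from the session survives into future processing.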
In conclusion, the six principles of DGPSI-AI provide a comprehensive governance model that appears to encompass the core tenets of major international AI frameworks, including those from the OECD, UNESCO, and the EU AI Act. As these principles are further developed and refined through feedback, they stand to offer a crucial roadmap for organizations navigating the complex intersection of AI innovation and data protection in India.