DGPSI-AI Principles: A Summary

DGPSI-AI is a comprehensive framework for AI governance and DPDPA compliance developed by Naavi.

This is a forward-looking initiative, built around a series of principles, to bridge the gap between artificial intelligence and data protection. The framework is an extension of the existing Data Governance and Protection System of India (DGPSI) and is specifically designed to guide organizations in ensuring that their use of AI complies with India’s Digital Personal Data Protection Act (DPDPA).

The DGPSI-AI framework is built upon six core principles: “Unknown Risk is Significant Risk,” Accountability, Explainability, Responsibility, Security, and Ethics. Together, these principles aim to provide a robust structure for the ethical and lawful deployment of AI systems.

Principle 1: “Unknown Risk is Significant Risk”

The foundational principle of the DGPSI-AI framework posits that any process involving AI—defined as autonomous software capable of modifying its behavior without human intervention—inherently carries an “unknown risk.” This is because AI, particularly self-correcting software, can evolve in unpredictable ways, potentially leading to unforeseen and catastrophic outcomes. Unlike traditional software, where risks are generally identifiable and manageable through testing, AI’s ability to autonomously alter its code introduces a level of uncertainty.

This principle suggests that any organization deploying AI should be automatically classified as a “Significant Data Fiduciary” under the DPDPA. This classification mandates more stringent compliance requirements, including the necessity of conducting Data Protection Impact Assessments (DPIAs), appointing a Data Protection Officer (DPO), and undergoing data audits. Downgrading this risk classification would require substantial documentation and explicit assurances from the AI developer.

Principle 2: Accountability

The principle of Accountability is central to AI governance. Within the DGPSI-AI framework, it establishes that autonomous AI systems must be accountable to the Data Fiduciary. Since an AI algorithm cannot be held legally responsible as a juridical entity, the accountability rests with the human element behind it. This could be an individual or a corporate entity, aligning with Section 11 of the Information Technology Act, 2000, which holds the person causing an automated system to act responsible for its actions.

Implementation of this principle involves two key actions. First, a mandated digital signature from the developer should be embedded in the AI’s code, creating a “chain of AI ownership.” Second, for every AI system, a designated human “Handler” or “AI Owner” must be disclosed. This ensures that, for external purposes, there is a clearly identified responsible party (the DPO or a compliance officer), while internally a specific process owner is assigned accountability.
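The framework does not prescribe a signing mechanism, so the following is only an illustrative sketch of how a “chain of AI ownership” record might bind a model artifact to its developer and its designated Handler. All field names and helper functions here are hypothetical, and an HMAC stands in for a true public-key digital signature.

```python
import hashlib
import hmac
import json

def sign_artifact(model_bytes: bytes, developer_id: str, handler: str,
                  signing_key: bytes) -> dict:
    """Build an ownership record binding the model artifact hash to the
    developer and the designated human Handler (hypothetical structure)."""
    digest = hashlib.sha256(model_bytes).hexdigest()
    payload = json.dumps(
        {"artifact_sha256": digest, "developer": developer_id, "handler": handler},
        sort_keys=True,
    )
    # HMAC is used here only as a stand-in for a real digital signature.
    signature = hmac.new(signing_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_artifact(model_bytes: bytes, record: dict, signing_key: bytes) -> bool:
    """Check that neither the artifact nor its ownership record was tampered with."""
    payload = json.loads(record["payload"])
    if hashlib.sha256(model_bytes).hexdigest() != payload["artifact_sha256"]:
        return False
    expected = hmac.new(signing_key, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

model = b"...model weights..."
record = sign_artifact(model, "developer-x", "dpo@example.com", b"secret-key")
assert verify_artifact(model, record, b"secret-key")
assert not verify_artifact(model + b"tampered", record, b"secret-key")
```

The point of the sketch is the audit trail: any change to the artifact or to the named Handler invalidates the record, so accountability can always be traced back to a human.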

Principle 3: Explainability

The third principle, Explainability, addresses the “black box” nature of many AI systems. It requires that organizations can provide clear and accessible reasons for the outputs generated by their AI. This is crucial for building trust and is a key component of transparency, a fundamental tenet of data protection law. The ability to explain an AI’s decision-making process is vital for the Data Fiduciary to fulfill its obligations to data principals.

Explainability is not only about transparency but also about risk management. If a Data Fiduciary cannot explain how an AI functions, the full accountability for its consequences may shift to the developer or licensor, who would then be considered a Joint Data Fiduciary. Real-world applications of explainability are seen in financial services for loan decisions, in healthcare for diagnoses, and in human resources for recruitment, ensuring that decisions are fair, unbiased, and justifiable.

Principle 4: Responsibility

The principle of “Responsible AI Usage” emphasizes that the deployment of AI should primarily benefit the data principals and not solely serve the profit motives of the Data Fiduciary. This aligns with international principles such as the OECD’s “Inclusive Growth” and UNESCO’s principles of “necessity and proportionality.” The use of AI should be justified by the value it adds over non-AI processes, and this justification must be documented.

Organizations are expected to create an “AI use justification document” that outlines the purpose of the AI, a cost-benefit analysis comparing it to traditional methods, and evidence that the value proposition could not be achieved otherwise. This ensures that AI is not adopted merely for fashion but for genuine business and societal needs, with the welfare of the data principal at the forefront.

Principle 5: Security

Security within the DGPSI-AI framework extends beyond typical cybersecurity to encompass the prevention of harm caused by the AI algorithm itself. The principle recognizes three main areas of risk to the data principal: potential physical harm, mental manipulation through “dark patterns,” and deeper neurological manipulation.

Given the “unknown” nature of AI risks, the Data Fiduciary must assume legal liability for any consequences. This necessitates obtaining assurances from the developer regarding rigorous testing and augmenting this with a “Liability” admission clause, supported by adequate insurance. The framework mandates that every AI algorithm should be insured against causing physical, mental, or neurological damage to users.

Principle 6: Ethics

The final principle of Ethics urges organizations to operate beyond the strict confines of written law and consider the broader societal good. This is particularly relevant in the current landscape where specific AI legislation is still developing. The DPDPA’s definition of a “Fiduciary” already implies an ethical duty to protect the interests of the data principal, and this principle extends that duty to AI processes.

Ethical considerations are to be identified through a thorough risk assessment process. The framework suggests that “Post Market Monitoring,” similar to the EU AI Act, can be an ethical practice where the impact of AI on data principals is monitored even after the initial processing is complete. Another ethical consideration is the concept of “Data Fading,” where the AI could, for instance, ask for consent at the end of each session to retain the learnings from that interaction, treating immediate processing and future reuse as distinct purposes requiring separate consent.

In conclusion, the six principles of DGPSI-AI provide a comprehensive governance model that appears to encompass the core tenets of major international AI frameworks, including those from the OECD, UNESCO, and the EU AI Act. As these principles are further developed and refined through feedback, they stand to offer a crucial roadmap for organizations navigating the complex intersection of AI innovation and data protection in India.

Posted in Privacy | Leave a comment

DGPSI-AI Principle-6: Ethics

The first principle we hear whenever we speak about AI Governance principles is “Ethical and Responsible AI”.

We have explored different dimensions of Responsible AI in the form of assuming risk responsibility, accountability and explainability. “Ethics” becomes relevant when there is no specific law to follow. For some more time, India will have to work without a specific law on AI and manage with a jurisprudential outlook on some principles of ITA 2000 and the DPDPA.

Hence an “Ethical Approach”, which goes beyond the written law and addresses what is good for society, is relevant for India. We alluded to this when we identified “Neuro Manipulation Prevention” as an objective of “Security of AI” in the previous article.

The “Ethical Principle” basically urges an organization to go beyond the written law and address other issues of society. The definition of “Fiduciary” under the DPDPA requires an entity to assume a “duty” to manage personal data in such a manner that it secures the interest of the data principal. Hence “Ethics” is already embedded in the DGPSI-Full/Lite principles. The extension to the AI process is therefore automatic.

The “Ethical” requirements can only be identified through a “Risk Assessment Process”, where some risks may appear to stretch the law a little far. When an auditor prepares a “Gap Assessment” based on the Model Implementation Specifications, and the auditee data fiduciary absorbs certain risks and creates an “Adapted Implementation Specification” for the auditor to probe with evidence through a “Deviation Justification Document”, the difference between what is “Ethical” and what is “Statutorily Mandatory” is flagged.

Similarly, when an assessment is made on an AI, the first Gap Assessment may follow the principle of ethics with “utmost care”. Subsequently, a “Deviation Justification Document for AI Deployment” may be prepared to adapt the Model Implementation Specifications of DGPSI-AI with modifications guided by the “Risk Absorption” decisions of the management.

The “Post Market Monitoring” referred to in the EU AI Act is a principle that Data Fiduciaries can adopt on ethical considerations: the impact of the AI on the Data Principals is monitored even after the purpose of processing is deemed complete, which may require a review of data retention. Hence, when an AI is allowed to store personal data after processing, the data fiduciary shall review the need for continued retention at periodic intervals and purge the data when the need no longer exists.
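The framework does not specify how such periodic retention review should be implemented. As a minimal sketch under assumed record fields (`purpose`, `last_reviewed` are my own illustrative names), a scheduled routine could re-check each stored item against its retention justification and purge it when the need lapses:

```python
from datetime import datetime, timedelta

# Hypothetical record structure: each stored item carries the purpose for
# which it is retained and the date that justification was last reviewed.
records = [
    {"id": 1, "purpose": "model-monitoring", "last_reviewed": datetime(2025, 1, 1)},
    {"id": 2, "purpose": None,               "last_reviewed": datetime(2025, 1, 1)},
]

def retention_review(records, now, review_interval=timedelta(days=90)):
    """Keep a record only while a live purpose exists; re-stamp the review
    date for records that are due for review and still justified."""
    kept = []
    for rec in records:
        if rec["purpose"] is None:          # need no longer exists: purge
            continue
        if now - rec["last_reviewed"] >= review_interval:
            rec["last_reviewed"] = now      # reviewed and deliberately retained
        kept.append(rec)
    return kept

kept = retention_review(records, now=datetime(2025, 6, 1))
# Only the record with a live purpose survives the review.
```

Running such a job on a fixed schedule gives the auditor a concrete trail showing that retention was re-justified, not merely allowed to continue by default.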

We have earlier discussed the concept of “Data Fading” principles for a developer. This can be another “Ethical Requirement”, adopted when the AI deployer continues training the model with internal data. Alternatively, at the end of each user session a question can be posed:

“Can the learnings of this session be retained for future use, or deleted?”

This would treat the immediate processing, the storing of data, and the re-use of the stored data as distinct purposes, each requiring fresh consent. This can be done periodically or at the end of every process cycle.
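As an illustration only (the framework prescribes no interface, and the function and field names below are hypothetical), a deployer could gate the retention of session learnings on an explicit end-of-session consent, so that “process”, “store” and “re-use” remain separate purposes:

```python
def end_session(session_learnings, retain_consent: bool) -> dict:
    """Separate the immediate-processing purpose from the storage/re-use
    purpose: learnings survive the session only with fresh consent."""
    if retain_consent:
        return {"stored": session_learnings, "purpose": "future-reuse"}
    return {"stored": None, "purpose": "immediate-processing-only"}

# Data Principal declines: the session's learnings are discarded.
assert end_session(["pattern-a"], retain_consent=False)["stored"] is None
# Data Principal consents: learnings are retained under a distinct purpose.
assert end_session(["pattern-a"], retain_consent=True)["stored"] == ["pattern-a"]
```

The design point is that the default is deletion; retention happens only on an affirmative answer, which matches the “Data Fading” idea of learnings expiring unless renewed.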

It appears that the six principles of DGPSI-AI could suffice to cover all the AI governance principles under the OECD Principles, the UNESCO Principles, the EU AI Act, as well as the Australian “Model Contractual Clauses” principles. We can continue to explore the Model Implementation Specifications under these principles to complete the development of the DGPSI-AI framework.

Naavi

(P.S.: Readers may appreciate that the concepts of DGPSI-AI are under development and that the need for refinement is recognized. Your comments will help us in the process.)


DGPSI-AI Principle 5: Security

So far we have discussed four principles of DGPSI-AI, a framework for DPDPA compliance by an AI Deployer. We will discuss the responsibilities of the Developer subsequently.

They are

a) “Unknown Risk is Significant Risk”

b) Accountability

c) Explainability

d) Responsibility

The principles are basically discussed in the AI Deployer’s scenario and are an extension of the DGPSI-Full framework.

To summarize what we have discussed:

The principle of “Unknown Risk is Significant Risk” suggests that an organization deploying AI should consider itself to be handling a “Significant Risk”, and that the AI process should therefore be treated as a “Significant Data Fiduciary” process requiring a DPIA, a DPO and a Data Auditor to meet the compliance requirements. The principle of “Accountability” extends the first principle by requiring the designation of an “AI Handler”, a human responsible for the consequences of the AI. The “Explainability” principle further requires that a deployer document the functioning of the AI process with respect to how its output is achieved. Since the functioning of the AI is determined by the developer, who may withhold the code, fulfilling the “Explainability” obligation between the Data Fiduciary and the data principal needs the support of a proper contractual document between the Data Fiduciary and the supplier of the AI tool. The fourth principle, “Responsible Deployment of AI”, requires a justification document on the necessity and proportionality of the value addition that the Data Fiduciary intends to achieve by deploying the AI.

The next principle we need to adopt as part of DGPSI-AI is “Security”. In this context, security means that the AI algorithm shall not cause harm to the data principal whose data is processed. The classification system adopted by the EU AI Act is based solely on the “Anticipated Risk to the Data Principal”. The risks to the data principal that we need to recognize are: potential physical harm; mental manipulation of the kind we normally recognize as “Dark Patterns”; and, thirdly, the deeper manipulation of the human brain that is the subject of Neuro Rights regulation.

Physical harm is predominant when AI is used in robots, both humanoid and industrial. Since humanoid robots are in most cases made of steel, the device is strong enough to cause significant physical damage if it misbehaves.

We can recall how a chess robot crushed the finger of an opposing player who made a wrong move. Similarly, there are instances of industrial robots dropping material on a worker and crushing him to death, of robots going physically rogue, and of the BINA48 episode in which the AI expressed a desire for a nuclear attack and for taking over the world.

Thus AI has to be secured for physical security, digital security and neuro security. However, given that AI risk is “unknown”, the management of physical security arising out of the deployment of AI is also constrained by the unknown nature of the risk.

From the compliance point of view, the Data Fiduciary has to assume legal liability for the consequences and obtain appropriate assurances from the developer of successful testing at the developmental stage, so that it can claim to have provided “Reasonable Security”.

Identification and acknowledgement of physical risks, Dark Patterns and neuro-manipulation risk is considered part of the disclosure in a Privacy Notice involving AI usage under the DGPSI-AI principle. This is more like a “Statutory Warning”: necessary, but not sufficient. Hence it is augmented by a “Liability” admission clause supported by suitable liability insurance.

In other words, every AI algorithm shall be insured against causing physical, mental or neurological damage to the user. Watch out for a list of Implementation Specifications further expanding on these principles.

Naavi


DGPSI-AI principle-4: Responsibility

The first three principles of DGPSI-AI, namely “Unknown Risk”, “Accountability” and “Explainability”, have been discussed in the previous posts.

Now we shall take up the principle of “Responsible AI Usage” which reflects the OECD principle of “Inclusive Growth” as well as the UNESCO principle of “necessity and proportionality”.

“Responsible Use” means that the Data Fiduciary shall ensure that the use of AI does not cause unnecessary harm to the data principals. Usage of AI should be more for the benefit of the Data Principals than for profit making by the Data Fiduciary.

The DGPSI-Full framework suggests implementation specifications, namely the “Monetization Policy” (MIS 13) read with the “Data Valuation Policy” (MIS 9). According to these, the processing of personal data has to recognize the value of the data before and after processing. The monetization policy has to explain what is being monetized and how.

The UNESCO principle of Responsibility states:

“The use of AI systems shall be governed by the principle of ‘necessity and proportionality’. AI systems, in particular, should not be used for social scoring or mass surveillance purposes.”

Here the principle takes into account the sensitivity of the processed data. In the monetization policy of DGPSI-Full, processes such as social scoring or surveillance have compliance implications as well as a “change of value” of the data during processing, which should reflect the higher value of the sensitive personal data, assuming its use was duly permitted by the data principal.

Hence the DGPSI-AI can absorb this UNESCO principle by stating

“The value addition from AI processing shall be sufficient for the Data Fiduciary as against a non-AI process, and shall be supported by appropriate consent.”

Such a principle will also meet ethical obligations, such as the expectation that the primary benefit of the AI’s use should flow to the Data Principals.

This “Value Addition Justification Principle” means that if the Data Fiduciary can achieve its data processing economically through a non-AI process, absorbing the “Unknown Risk” of the AI may not be necessary.

AI should not be adopted merely for the sake of fashion; its use should be justified by an “AI use justification document”. This document should specify the purpose of the AI’s use and the value proposition that cannot otherwise be achieved at the same cost.

Such a document  shall contain

  1. Purpose of AI Use

    • Clear articulation of why AI is necessary for the specific data processing objective

    • Identification of the problem or opportunity that AI addresses

  2. Value Proposition Analysis

    • Quantified benefits that AI processing provides over traditional methods

    • Cost-benefit analysis comparing AI versus non-AI approaches

    • Demonstration that equivalent value cannot be achieved at the same cost through conventional processing

  3. Necessity Assessment

    • Evidence that the organization lacks viable non-AI alternatives for achieving the same processing objectives

    • Economic justification for absorbing the “Unknown Risk” inherent in AI systems

Organizations implementing this principle should:

  1. Conduct Value Addition Assessments before implementing AI systems

  2. Document Justifications for choosing AI over traditional processing methods

  3. Regularly Review AI implementations to ensure continued justification

  4. Monitor Impact on data principals to prevent unintended harm

  5. Maintain Transparency about monetization and value creation from personal data processing

The Responsible AI Usage principle thus ensures that AI deployment serves genuine business and social needs rather than merely following technological trends, while maintaining focus on data principal welfare as the primary justification for processing personal data through potentially risky autonomous systems.

The “Responsibility” principle at the developer’s end may have a slightly different perspective, since it has to incorporate fairness into the development and testing process. Hence there could be some difference in the application of this principle between the Developer and the Deployer.

Naavi


Is there a Scam at NIXI?

While following the issue of the arbitrary take-over notice issued for dpdpa.in, I have come across additional information suggesting that there is large-scale mismanagement at NIXI and that some officials are indulging in issuing false notices in the names of other Government departments and later selling the domain names.

There is a need to get more information on this and whether any corruption is involved.

I wish some Delhi based public interest oriented lawyers can take up this issue to clean up NIXI if there are any malpractices.

I request the CAG to take note and investigate. Subsequently, the CVC may also conduct its own investigation.

Naavi

Posted in Cyber Law | Leave a comment

Notice to NIXI on attempted Illegal acquisition of dpdpa.in domain name

I am reproducing below a notice sent to NIXI in response to their notice regarding the acquisition of the domain name dpdpa.in. The notice was sent by Ujvala Consultants Pvt Ltd, which is the registrant.

This is placed in the public domain so that anybody else affected by a similar arbitrary acquisition of a dot-in domain name may also take appropriate action.

Quote:

To

.IN Registry

National Internet Exchange of India (NIXI)

B-901,9th Floor Tower B, World Trade Center

Nauroji Nagar

New Delhi 110029

Subject: Objection to your digitally unsigned e-mail dated August 1, 2025, to naavi2011@gmail.com, regarding the domain name dpdpa.in

Sirs

I am Vijayashankar Nagaraja Rao, popularly known as Naavi, a resident of Bengaluru at the above address and Managing Director of Ujvala Consultants Pvt Ltd. I refer to your above e-mail and regret to note that you have unilaterally notified that the domain dpdpa.in, registered in our name, is placed under server lock and that you would initiate transfer of the domain thereafter.

In the e-mail, you have also indicated that the “Govt of India” desires to get the domain name registered for itself. However, you have not provided a copy of any gazette notification in this regard. Hence your claim is completely arbitrary, unjustified and illegal. It violates the constitutional rights guaranteed under Article 300A of the Indian Constitution.

I strongly object to this unilateral, arbitrary and illegal decision and give notice that I intend to challenge it in an appropriate Court unless the order is withdrawn immediately.

I also give you notice that I refuse any arbitration proceedings under clause 11 of the Terms and Conditions cited by you, which is again unsigned and unauthenticated, since you have a vested interest in the arbitration proceedings and the dispute is not one relating to the registration of a domain name raised by a member of the public. I am an individual resident of Bengaluru and the impact of your decision falls on me at my place, while you are an agency of the Government of India with an all-India presence. Hence I reserve the right to file Court proceedings at Bengaluru, and the jurisdiction for the litigation is to be considered as Bengaluru.

I reserve the right to claim compensation for any damage caused due to this arbitrary and illegal decision of the registry which I may specify at the appropriate time.

It is possible that somebody else may also consider this a matter for “Public Interest Litigation”, since this move directly introduces uncertainty into dot-in domain name registration if the Government acts in such an arbitrary manner. I leave it to such persons to proceed separately.

I note that there is no information on whether you have moved similar proceedings against other domain name registrations of “dpdpa”, such as dpdpa.com, dpdpa.net, dpdpa.co.in and the other 315 country-code extensions, or against confusingly similar names such as dpdpa-india, or other typographical configurations such as dpdp-act.in and dpdp4india.in. Hence your action is discriminatory and points to a possible ulterior motive.

The domain name system provides for the registration of dpdpa.gov.in, which is reserved for Government organizations, and it was your duty, if there was such a request, to suggest to the Government of India that it should operate on dpdpa.gov.in.

The proposed move of NIXI to usurp the domain dpdpa.in is similar to the acquisition of a citizen’s property and conflicts with the provisions of the Indian Constitution, under Article 300A amongst others; hence it is a subject matter for litigation as a violation of the constitutional rights of a citizen of India. The domain name cannot be arbitrarily acquired without proper reason and an offer of adequate compensation.

I have been an observer of domain-name-related developments across the globe since before NIXI was born and have always considered the Government of India a friend. However, this move of NIXI appears to me like a betrayal, similar to what Donald Trump exhibited against India.

It is my desire that wiser counsel prevails, and that NIXI drops this notice and advises the Government to register www.dpdpa.gov.in.

Way back around 2005, I suggested a system called “Look Alikes” to ensure the co-existence of similar-looking domain names with appropriate mutual disclosures. The concept is still visible on www.looaklikes.in. I myself live with such a conflict between my primary site www.naavi.org and www.naavi.com, which is presently owned by an Australian company, as well as several other similar names, including the phonetically similar www.navi.co.in. I will provide a similar disclosure on www.dpdpa.in if the Government decides to open a separate website at www.dpdpa.gov.in.

If, however, NIXI does not recognize the right of Indian citizens to register an available domain name and use it, NIXI will be responsible for diverting registrations from the dot-in extension to the dot-com extension. This would be an adverse outcome for Indian national interests, more to the liking of the opposition political parties than of the ruling nationalistic Government of Mr Narendra Modi.

I consider it possible that the current move has been deliberately instigated by some wrong advice. If so, this would be an indication of corruption in the Government, which would be the subject matter of an investigation, including a report on how many domain names of criminals are being accommodated by NIXI, even after being notified either by activists like us or by the Home Ministry, without any action being taken, and the possible consideration for the same.

I also refer to news reports from 2005, such as the Times of India report “Domain Squatters threat to ‘.in’”, many of which concerned registrations in violation of the dot-in registration rules that were ignored by NIXI.

I have myself urged NIXI to protect domain name owners in the Net4India fiasco, which you failed to do. Other agencies have pointed out the large number of registrations by foreign firms even at the beginning of the dot-in domain names, which was also reported on naavi.org at that time.

NIXI has failed to fulfil its duty to the public by not taking action against such fraudulent domain name registrations or against the closure of Net4India.com, and has become vicariously liable, though no proceedings have so far been initiated.

A proper investigation may reveal many other instances of negligence and deliberate mismanagement of the dot-in domain registration system. These are left unpursued by activists because NIXI is considered part of the nationalistic movement to bring domain registrations into Indian jurisdiction, even though commercially it is still beneficial to register dot-com rather than dot-in domain names.

It is possible that somebody may now be interested in diverting dot-in registrations back to the dot-com space, for a reason known to NIXI itself.

I draw attention to your notice, in which you have cited clause 12 of the terms and conditions, as follows.

12. Reservation of Rights for the .IN Registry:

The .IN Registry reserves the right to instruct its Registry Services Provider to deny, cancel, transfer or otherwise make unavailable any registration that it deems necessary or place any domain name(s) on registry lock and/or put a domain name on hold in its discretion:

(1) to protect the integrity and stability of .IN Registry;

(2) to comply with any applicable laws, Indian government rules or requirements, requests of law enforcement, in compliance with any dispute resolution process;

(3) to avoid any liability, civil or criminal, on the part of the .IN Registry, as well as its affiliates, subsidiaries, officers, directors, representatives and employees;

(4) for violations of this Agreement; or

(5) to correct mistakes made by the Registry or any Registrar in connection with a domain name registration. The Registry also reserves the right to freeze a domain name during resolution of a dispute pending before arbitrator(s) appointed under the Registry’s Domain Name Dispute Resolution Policy and/or a court of competent jurisdiction.

I do not see any of the above five reasons justifying your action related to dpdpa.in. I demand that you present appropriately authenticated evidence before you take any penal action, and prove it in a court of law, along with a copy of the specific Gazette Notification for acquiring the domain name property dpdpa.in from Ujvala Consultants Pvt Ltd.

It is a very unproductive exercise for professionals like us to pursue such disputes, but in the interest of ensuring that institutions of the Government function properly, it is also our duty. Hence, with a lot of discomfort and pain, I am sending this notice. This is similar to the tax terrorism faced by honest citizens: undesirable, but without an option.

I request NIXI to look at the larger implications of such an acquisition of a domain name, the adverse effect it would have on future domain name registrations in the dot-in extension, and the discredit you are bringing to Mr Modi’s Government with this “Emergency-like” operation, and to withdraw this notice forthwith.

In the larger interest of Indian citizens, I will be placing this notice in the public domain along with your response, positive or negative.

Yours sincerely

Na.Vijayashankar

Managing Director

Unquote:

 
