DGPSI-AI Principle-6: Ethics

The first principle we hear whenever we speak about AI Governance principles is “Ethical and Responsible AI”.

We have explored different dimensions of Responsible AI in the form of assuming risk responsibility, accountability and explainability. “Ethics” becomes relevant when there is no specific law to follow. For some more time, India will have to work without a specific law on AI and has to manage with a jurisprudential outlook on some principles of ITA 2000 and DPDPA.

Hence an “Ethical Approach”, which goes beyond the written law and addresses what is good for society, is relevant for India. We alluded to this when we identified “Neuro Manipulation Prevention” as an objective of “Security of AI” in the previous article.

The “Ethical Principle” basically urges an organization to go beyond the written law and address other issues of society. The definition of “Data Fiduciary” under DPDPA requires an entity to assume a “Duty” to manage personal data in such a manner that it secures the interest of the data principal. Hence “Ethics” is already embedded in the DGPSI-Full/Lite principles. The extension to the AI process is therefore automatic.

The “Ethical” requirements can only be identified through a “Risk Assessment Process” in which some risks may appear to stretch the law a little far. Hence, when an auditor prepares a “Gap Assessment” based on the Model Implementation Specifications, and the auditee data fiduciary absorbs certain risks and creates an “Adapted Implementation Specification” for the auditor to probe with evidence through a “Deviation Justification Document”, the difference between what is “Ethical” and what is “Statutorily Mandatory” is flagged.

Similarly, when an assessment is made on an AI, the first Gap Assessment may follow the principle of ethics with “utmost care”. Subsequently, a “Deviation Justification Document for AI Deployment” may be prepared to adapt the model implementation specifications of DGPSI-AI with modifications guided by the “Risk Absorption” decisions of the management.

The “Post Market Monitoring” referred to in the EU AI Act is a principle that Data Fiduciaries following ethical considerations can adopt, whereby they monitor the impact of the AI on the Data Principals even after the purpose of processing is deemed to be complete, which may require a review of data retention. Hence, when an AI is allowed to store personal data after processing, the data fiduciary shall review the need for data retention at periodic intervals and purge the data when the need no longer exists.
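As an illustration only, the following minimal Python sketch shows what such a periodic retention review could look like; the record fields and the 90-day review interval are assumptions for the example and are not prescribed by DGPSI-AI.

```python
from datetime import date, timedelta

# Illustrative store of personal data that an AI process retained after the
# original purpose was completed. Field names here are assumptions.
retained_records = [
    {"id": "rec-001", "need": "pending grievance", "review_due": date(2025, 9, 1)},
    {"id": "rec-002", "need": None,                "review_due": date(2025, 6, 1)},
]

def periodic_retention_review(records, today=None):
    """Keep records with a documented retention need; purge the rest and
    schedule the next review for whatever is kept."""
    today = today or date.today()
    kept, purged = [], []
    for rec in records:
        if rec["need"] is None:
            purged.append(rec["id"])          # need no longer exists: purge
        else:
            if rec["review_due"] <= today:    # review fell due: push it forward
                rec["review_due"] = today + timedelta(days=90)
            kept.append(rec)
    return kept, purged

kept, purged = periodic_retention_review(retained_records, today=date(2025, 8, 15))
print("purged:", purged)   # ['rec-002']
```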

We have earlier discussed the concept of “Data Fading” principles for a developer. This can be another “Ethical Requirement”, adopted when the AI deployer continues the training of the model with internal data. Alternatively, at the end of each user session a question can be posed:

“Can the learnings of this session be retained for future use, or should they be deleted?”

This would treat the immediate processing, the “storing” and the “re-use of the stored data” as different purposes, with a new consent obtained for storage and re-use. The question can be posed periodically or at the end of every process cycle.
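As a rough sketch of the idea, the Python snippet below (with hypothetical class and field names) keeps the working session data separate from retained “learnings” and persists the latter only when the session-end question is answered in the affirmative.

```python
# A minimal sketch (hypothetical class and field names) separating the immediate
# processing purpose from the "storing" and "re-use" purposes, which are gated
# by the answer to the session-end question.
class SessionMemory:
    def __init__(self):
        self.session_data = []        # used only for the immediate processing
        self.retained_learnings = []  # persisted only with fresh consent

    def record(self, item):
        self.session_data.append(item)

    def close_session(self, retain_consented: bool):
        """Called at the end of each user session."""
        if retain_consented:
            # "Storing" and "re-use of the stored data" treated as a new purpose
            self.retained_learnings.extend(self.session_data)
        # The working copy for the immediate purpose is cleared either way
        self.session_data.clear()


memory = SessionMemory()
memory.record({"query": "example interaction"})
# "Can the learnings of this session be retained for future use?" -> this flag
memory.close_session(retain_consented=False)
print(memory.retained_learnings)  # [] -- nothing retained without consent
```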

It appears that the six principles of DGPSI-AI could suffice to cover all the AI Governance principles covered under the OECD principles, the UNESCO principles, the EU AI Act principles as well as the Australian “Model Contractual Clauses” principles. We can continue to explore the Model Implementation Specifications under these principles to complete the development of the DGPSI-AI framework.

Naavi

(P.S: Readers may appreciate that the concepts of DGPSI-AI are under development and that the need for refinement is recognized. Your comments will help us in the process.)

Posted in Privacy

DGPSI-AI Principle 5: Security

So far we have discussed four principles of DGPSI-AI, a framework for compliance with DPDPA by an AI Deployer. We will discuss the responsibilities of the Developer subsequently.

They are

a) “Unknown Risk is Significant Risk”

b) Accountability

c) Explainability

d) Responsibility

These principles have been discussed primarily in the AI Deployer’s scenario and are an extension of the DGPSI-Full framework.

To summarize what we have discussed so far:

The principle of “Unknown Risk is Significant Risk” suggests that an organization deploying AI should consider itself as handling a “Significant Risk” and therefore the AI process should be treated as a “Significant Data Fiduciary” process requiring a “DPIA”, a “DPO” and a “Data Auditor” to meet the compliance requirements. The principle of “Accountability” extends the first principle by requiring the designation of an “AI Handler” as the human responsible for the consequences of an AI. The “Explainability” principle further requires that a deployer documents the functioning of the AI process with respect to how the output is arrived at. Since the functioning of the AI is determined by the developer, who may withhold the code, fulfilling the “Explainability” obligation between the Data Fiduciary and the data principal needs the support of a proper contractual document between the Data Fiduciary and the supplier of the AI tool. The fourth principle of “Responsible deployment of AI” requires a justification document on the necessity and proportionality of the value addition that the Data Fiduciary intends to achieve by deploying the AI.

The next principle which we need to adopt as part of DGPSI-AI is “Security”. In this context, security means that the AI, as an algorithm, shall not cause harm to the data principal whose data is processed. The classification system adopted by the EU AI Act is based solely on the “Anticipated Risk to the Data Principal”. The risks to the data principal that we need to recognize are the potential physical harm, if any; mental manipulation of the kind we normally recognize as “Dark Patterns”; and, thirdly, the deeper manipulation of the human brain which is the subject of Neuro Rights regulation.

Physical harm is predominant when AI is used in robots, both humanoid and industrial. Since humanoid robots are in most cases made of steel, the device is strong enough to cause significant physical damage if it misbehaves.

We can recall how a chess robot crushed the finger of an opponent player who made a wrong move. Similarly, there are instances of industrial robots dropping material on a worker and crushing him to death, of robots going rogue physically, and the BINA 48 episode in which the robot spoke of desiring a nuclear attack and taking over the world.

Thus AI has to be secured for physical security, digital security and neuro security. However, given the fact that AI risk is “UNKNOWN”, the management of physical security arising out of the deployment of AI is also constrained by the unknown nature of the risk.

From the compliance point of view, the Data Fiduciary has to assume legal liability for the consequences and take appropriate assurances from the developer of successful testing at the development stage, so that it can hope to claim “Reasonable Security”.

Under the DGPSI-AI principle, identification and acknowledgement of physical risks, Dark Patterns and Neuro Manipulation risk is considered part of the disclosure in a Privacy Notice involving AI usage. This is more like a “Statutory Warning”: necessary but not sufficient. Hence it is augmented by a “Liability” admission clause supported by suitable liability insurance.
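As a rough illustration only, the sketch below shows how such a disclosure block in a privacy notice could be represented; the keys, wording and structure are assumptions, not a prescribed DGPSI-AI format.

```python
# Illustrative sketch of an AI-risk disclosure section in a privacy notice.
# Keys and wording are assumptions, not a mandated format.
ai_risk_disclosure = {
    "ai_usage": "Parts of this service use an AI system to process your personal data.",
    "acknowledged_risks": [
        "Physical harm (where the AI controls a physical device)",
        "Mental manipulation / dark patterns",
        "Neuro manipulation",
    ],
    "liability_admission": (
        "The Data Fiduciary accepts liability for harm caused to the data "
        "principal by the AI process."
    ),
    "insurance": {
        "covered": True,
        "policy_reference": "to-be-filled",   # placeholder, not a real policy number
    },
}

for risk in ai_risk_disclosure["acknowledged_risks"]:
    print("Disclosed risk:", risk)
```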

In other words, every AI algorithm shall be insured against causing any damage to the user, whether physical, mental or neurological. Watch out for a list of implementation specifications further expanding on these principles.

Naavi

Posted in Privacy

DGPSI-AI principle-4: Responsibility

The first three principles of DGPSI-AI, namely “Unknown Risk”, “Accountability” and “Explainability”, have been discussed in the previous posts.

Now we shall take up the principle of “Responsible AI Usage” which reflects the OECD principle of “Inclusive Growth” as well as the UNESCO principle of “necessity and proportionality”.

“Responsible Use” means that the Data Fiduciary shall ensure that the use of AI does not cause unnecessary harm to the data principals. Usage of AI should be more for the benefit of the Data Principals and not merely for profit making by the Data Fiduciary.

The DGPSI-Full framework suggests implementation specifications, namely the “Monetization Policy” (MIS 13) read with the “Data Valuation Policy” (MIS 9). According to these, processing of personal data has to recognize the value of data before and after processing. The monetization policy has to explain what is being monetized and how.

The UNESCO principle of Responsibility states:

“The use of AI systems shall be governed by the principle of ‘necessity and proportionality’. AI systems, in particular, should not be used for social scoring or mass surveillance purposes.”

Here the principle takes into account the sensitivity of the processed data. In the monetization policy of DGPSI-Full, processes such as social scoring or surveillance have compliance implications as well as a “Change of Value” of the data during processing, which should reflect the higher value of the sensitive personal data, assuming that the processing was duly permitted by the data principal.

Hence DGPSI-AI can absorb this UNESCO principle by stating:

“The value addition from AI processing shall be sufficient for the Data Fiduciary when measured against a non-AI process, and shall be supported by appropriate consent.”

Such a principle will also meet ethical obligations, such as the expectation that the primary benefit of the AI use should flow to the Data Principals.

This “Value Addition Justification Principle” means that if the Data Fiduciary has the means of achieving its data processing economically through a non-AI process, there may be no need to absorb the “Unknown Risk” of the AI.

Use of AI should not be merely for the sake of fashion and should be justified by an “AI use justification document”. This document should specify the purpose of the use of the AI and the value proposition that cannot otherwise be achieved at the same cost.

Such a document shall contain:

  1. Purpose of AI Use

    • Clear articulation of why AI is necessary for the specific data processing objective

    • Identification of the problem or opportunity that AI addresses

  2. Value Proposition Analysis

    • Quantified benefits that AI processing provides over traditional methods

    • Cost-benefit analysis comparing AI versus non-AI approaches

    • Demonstration that equivalent value cannot be achieved at the same cost through conventional processing

  3. Necessity Assessment

    • Evidence that the organization lacks viable non-AI alternatives for achieving the same processing objectives

    • Economic justification for absorbing the “Unknown Risk” inherent in AI systems

Organizations implementing this principle should:

1. Conduct Value Addition Assessments before implementing AI systems

2. Document Justifications for choosing AI over traditional processing methods

3. Regularly Review AI implementations to ensure continued justification

4. Monitor Impact on data principals to prevent unintended harm

5. Maintain Transparency about monetization and value creation from personal data processing

The Responsible AI Usage principle thus ensures that AI deployment serves genuine business and social needs rather than merely following technological trends, while maintaining focus on data principal welfare as the primary justification for processing personal data through potentially risky autonomous systems.
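As an illustration of how the contents listed above could be maintained as a structured record, here is a minimal Python sketch; the class name, fields and the crude cost comparison are assumptions for illustration, not prescribed by DGPSI-AI.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of an "AI use justification document" as a structured
# record. Field names loosely mirror the headings listed above.
@dataclass
class AIUseJustification:
    purpose: str                         # why AI is necessary for this processing
    problem_addressed: str               # the problem or opportunity AI addresses
    ai_benefits: List[str]               # quantified benefits over traditional methods
    ai_cost: float                       # estimated cost of the AI process
    non_ai_cost: float                   # estimated cost of the best non-AI alternative
    non_ai_alternatives: List[str] = field(default_factory=list)
    review_notes: List[str] = field(default_factory=list)

    def value_addition_justified(self) -> bool:
        """Crude test: AI is justified if no non-AI alternative exists, or the
        non-AI alternative cannot deliver the same value at comparable cost."""
        return not self.non_ai_alternatives or self.ai_cost < self.non_ai_cost

doc = AIUseJustification(
    purpose="Automated triage of customer grievances",
    problem_addressed="Manual triage cannot keep pace with ticket volume",
    ai_benefits=["Response time cut from 48h to 2h"],
    ai_cost=10.0,
    non_ai_cost=35.0,
    non_ai_alternatives=["Hire additional triage staff"],
)
print(doc.value_addition_justified())  # True under these assumed numbers
```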

The “Responsibility” Principle at the developer’s end may have a slightly different perspective since it has to incorporate fairness in the development and testing process. Hence there could be some difference in the application of this principle between the Developer and the Deployer.

Naavi

Posted in Privacy

Is there a Scam at NIXI?

While following up on the arbitrary take-over notice issued for dpdpa.in, I have come across additional information that there is large-scale mismanagement at NIXI and that some officials are indulging in issuing false notices in the names of other Government departments and later selling the domain names.

There is a need to get more information on this and whether any corruption is involved.

I wish some Delhi-based, public-interest-oriented lawyers would take up this issue to clean up NIXI if there are any malpractices.

I request the CAG to take note and investigate. Subsequently, the CVC may also conduct its own investigation.

Naavi

Posted in Cyber Law

Notice to NIXI on attempted Illegal acquisition of dpdpa.in domain name

I am reproducing below a notice being sent to NIXI in response to their notice related to the acquisition of the domain name dpdpa.in. The notice was sent by Ujvala Consultants Pvt Ltd which is the registrant.

This is placed in the public domain so that anybody else who is affected by similar arbitrary acquisition of dot in domain names may also take appropriate action.

Quote:

To

.IN Registry

National Internet Exchange of India (NIXI)

B-901, 9th Floor, Tower B, World Trade Center

Nauroji Nagar

New Delhi 110029

Subject: Objection to your e-mail dated August 1, 2025 (not digitally signed) to naavi2011@gmail.com, regarding the domain name dpdpa.in

Sirs

I am Vijayashankar Nagaraja Rao, popularly known as Naavi, a resident of Bengaluru at the above address and Managing Director of Ujvala Consultants Pvt Ltd. I refer to your above e-mail and regret to note that you have unilaterally notified that the domain dpdpa.in, registered in our name, has been placed under server lock and that you would initiate transfer of the domain thereafter.

In the e-mail, you have also indicated that the “Govt of India” desires to get the domain name registered for itself. However, you have not provided a copy of any gazette notification in this regard. Hence your claim is completely arbitrary, unjustified and illegal. It violates the constitutional rights under Article 300A of the Indian Constitution.

I strongly object to this unilateral, arbitrary and illegal decision and give notice that I intend to challenge it in an appropriate Court unless the order is withdrawn immediately.

I also give you notice that I refuse any arbitration proceedings under clause 11 of the Terms and Conditions cited by you, which are again unsigned and unauthenticated, since you have a vested interest in the arbitration proceedings and the dispute is not one relating to the registration of a domain name raised by a member of the public. I am an individual resident of Bengaluru and the impact of your decision is on me at my place, while you are an agency of the Government of India with an all-India presence; hence I reserve the right to file Court proceedings at Bengaluru, and the jurisdiction for the litigation is considered to be Bengaluru.

I reserve the right to claim compensation for any damage caused due to this arbitrary and illegal decision of the registry which I may specify at the appropriate time.

It is possible that somebody else may also consider this as a “Public Interest Litigation”, since this move directly introduces an uncertainty into dot in domain name registration if the Government acts in such an arbitrary manner. I leave it to such persons to proceed separately.

I note that there is no information on whether you have moved similar proceedings against all domain name extensions of “dpdpa”, such as dpdpa.com, dpdpa.net, dpdpa.co.in and the other 315 country code extensions, as well as confusingly similar extensions such as dpdpa-india or other typographical configurations such as dpdp-act.in, dpdp4india.in etc. Hence your action is discriminatory and points to a possible ulterior motive.

The domain name system provides for the registration of dpdpa.gov.in, which is reserved for Government organizations, and it was your duty, if there was such a request from the Government of India, to suggest that it operate on dpdpa.gov.in.

The proposed move of NIXI to usurp the domain dpdpa.in is similar to the acquisition of the property of a citizen and conflicts with the provisions of the Indian Constitution under Article 300A, amongst others, and hence is a subject matter for litigation as a violation of the constitutional rights of a citizen of India. The domain name cannot be arbitrarily acquired without proper reason and an offer of adequate compensation.

I have been an observer of domain name related developments across the globe since before NIXI was born and have always considered the Government of India a friend. However, this move of NIXI appears to me like a betrayal, similar to what Donald Trump exhibited against India.

It is my desire that wiser counsel prevails, that NIXI drops this notice and that it advises the Government to register www.dpdpa.gov.in.

Way back around 2005, I suggested a system called “Look Alikes” to ensure co-existence of similar looking domain names with appropriate mutual disclosures. The concept is still visible on www.looaklikes.in. I am myself living with such a conflict between my primary site www.naavi.org and www.naavi.com, which is presently owned by an Australian company, and several other similar names including www.navi.co.in, which is a phonetically similar domain name. I will provide a similar disclosure on the site www.dpdpa.in if the Government decides to open a separate website www.dpdpa.gov.in.

If, however, NIXI does not recognize the right of Indian citizens to register an available domain name and use it, NIXI will be responsible for diverting registrations from the dot IN extension to the dot com extension. This will be an adverse outcome for Indian national interests, which would be more to the liking of the opposition political parties than of the ruling nationalistic Government of Mr Narendra Modi.

I consider it possible that the current move has been deliberately instigated by some wrong advice. If so, this would be an indication of corruption in the Government, which will be the subject matter of an investigation that should include a report on how many domain names of criminals are being accommodated by NIXI without any action, even after they have been notified either by activists like us or by the Home Ministry, and the possible consideration for the same.

I also refer to news reports in 2005, such as the report in the Times of India, “Domain Squatters threat to ‘.in’”, many of which described registrations in violation of the dot in registration norms that were ignored by NIXI.

I have myself urged NIXI to protect domain name owners in the Net4India fiasco, which you failed to do. Other agencies have pointed out the large number of registrations by foreign firms at the very beginning of the dot in domain names, which was also reported on naavi.org at that time.

NIXI has failed to fulfil its duty to the public by not taking action against such fraudulent domain name registrations or against the closure of Net4India.com, and has become vicariously liable, though no proceedings have so far been initiated.

A proper investigation may reveal many other instances of negligence and deliberate mismanagement of the dot in domain registration system, which are being left unpursued by activists because NIXI is considered part of the nationalistic movement of bringing domain registrations into Indian jurisdiction, though commercially it is still beneficial to register dot com instead of dot in domain names.

It is possible that somebody may now be interested in diverting dot in registrations back to the dot com space, for a reason known to NIXI itself.

I draw attention to your notice in which you have cited clause 12 of the terms and conditions, as follows.

12. Reservation of Rights for the .IN Registry:

The .IN Registry reserves the right to instruct its Registry Services Provider to deny, cancel, transfer or otherwise make unavailable any registration that it deems necessary or place any domain name(s) on registry lock and/or put a domain name on hold in its discretion:

 (1) to protect the integrity and stability of .IN Registry;

(2) to comply with any applicable laws, Indian government rules or requirements, requests of law enforcement, in compliance with any dispute resolution process;

(3) to avoid any liability, civil or criminal, on the part of the .IN Registry, as well as its affiliates, subsidiaries, officers, directors, representatives and employees;

(4) for violations of this Agreement; or

(5) to correct mistakes made by the Registry or any Registrar in connection with a domain name registration. The Registry also reserves the right to freeze a domain name during resolution of a dispute pending before arbitrator(s) appointed under Registry’s Domain Name Dispute Resolution Policy and/or a court of competent jurisdiction.

I do not see any of the above five reasons justifying your action related to dpdpa.in. I demand that you present appropriate authenticated evidence before you take any penal action, and prove it in a court of law, along with a copy of the specific Gazette Notification for acquiring the domain name property dpdpa.in from Ujvala Consultants Pvt Ltd.

It is a very unproductive exercise for professionals like us to pursue such disputes, but in the interest of ensuring that institutions of the Government function properly, it is also our duty. Hence, with a lot of discomfort and pain, I am sending this notice. This is similar to the tax terrorism faced by honest citizens: undesirable, but without an option.

I request NIXI to look at the larger implications of such an acquisition of a domain name, the adverse effect it would have on future domain name registrations in the dot in extension and the discredit it brings to Mr Modi’s Government with this “Emergency-like” operation, and to withdraw this notice forthwith.

In the larger interest of the Indian Citizens, I will be placing this notice for public information along with your response, positive or negative.

Yours sincerely

Na.Vijayashankar

Managing Director

Unquote:

Posted in Cyber Law

Explainability… DGPSI-AI Principle no 3.

We have discussed in the earlier articles two principles of DGPSI-AI, a child framework of DGPSI for compliance of DPDPA in AI systems, namely “Unknown Risk” and “Accountability”. We shall now extend our discussion to the third principle, namely “Explainability”.

An AI takes an input and provides an output. But how it arrives at the output is a function of the algorithmic model and the training process. Explainability is the provision of clear and accessible reasons for why a certain decision or output was generated. Lack of such explainability makes the AI a “Black Box”.

In the case of a “Black Box AI”, the entire accountability for the consequences of AI deployment rests with the licensor, who clearly assumes the role of a Joint Data Fiduciary. DGPSI-AI expects that the “Unknown Risk” principle itself defines the developer/licensor as a Data Fiduciary. If, however, any “exemption” is to be claimed, or the deployer wants to absorb the risk on behalf of the developer/licensor, the justification can be found only through the explainability feature of the AI.

Explainability also underscores “Transparency” and is supported by “Testing” and “Documentation” at the developer’s end, whether these are shared with the deployer or backed by a third party assurance.

The objective of Explainability is to inject “Trust” into the algorithm’s functioning.

Some of the real world examples of how explainability works are as follows.

Financial Services: In credit scoring and loan approvals, AI explainability helps financial institutions:

    • Show customers why their loan application was approved or denied
    • Identify which factors (income, credit history, employment status) most influenced the decision
    • Ensure compliance with fair lending regulations that require transparent decision-making

Healthcare: AI diagnostic tools use explainability to:

    • Highlight specific regions in medical images that led to a diagnosis
    • Rank the importance of different symptoms or test results
    • Provide confidence scores for diagnoses to help doctors make informed decisions

Human Resources: AI-powered recruitment systems demonstrate explainability by:

    • Showing which qualifications and experience factors influenced candidate scoring
    • Ensuring hiring decisions can be justified and are free from bias
    • Providing transparency to candidates about how their applications were evaluated

Criminal Justice: AI systems used for risk assessment must explain:

    • Which factors contribute to recidivism risk scores
    • How different variables are weighted in the decision process
    • Why certain interventions are recommended for specific individuals

Content Moderation: Social media platforms use explainable AI to:

    • Show users why their content was flagged or removed
    • Identify specific phrases or images that triggered moderation actions
    • Provide transparency in community guideline enforcement
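As a simple illustration of the Financial Services example above, the following Python sketch assumes a toy linear credit-scoring model; the weights, applicant values and approval threshold are made up. With such a model, each feature’s contribution to the score can be reported directly, which is the kind of explanation a data principal could be shown.

```python
# Minimal sketch of explainability for a credit-scoring decision, assuming a
# simple linear scoring model. With a linear model, each feature's contribution
# is just weight * value, giving the "which factors influenced the decision" view.
weights = {"income": 0.4, "credit_history": 0.5, "employment_years": 0.1}
applicant = {"income": 0.3, "credit_history": 0.8, "employment_years": 0.6}
THRESHOLD = 0.5  # assumed approval cut-off

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score >= THRESHOLD else "denied"

print(f"Decision: {decision} (score={score:.2f})")
for feature, contribution in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: contributed {contribution:.2f} to the score")
```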

Considering the wide utility of Explainability and its direct relation to “Transparency” in data protection law, where the deployer has to explain the processing to the data principals, this is considered an important principle under the DGPSI-AI system.

Naavi

Posted in Cyber Law