DGPSI-AI Principle-6: Ethics

The first principle we hear whenever AI Governance is discussed is “Ethical and Responsible AI”.

We have explored different dimensions of Responsible AI in the form of assuming risk responsibility, accountability and explainability. “Ethics” becomes relevant when there is no specific law to follow. For some more time, India will have to work without a specific law on AI and manage with a jurisprudential outlook on some principles of ITA 2000 and DPDPA.

Hence an “Ethical Approach”, which goes beyond the written law and addresses what is good for society, is relevant for India. We touched upon this when we alluded to “Neuro Manipulation Prevention” as an objective of “Security of AI” in the previous article.

The “Ethical Principle” basically urges an organization to go beyond the written law and address other concerns of society. The definition of “Fiduciary” under DPDPA requires an entity to assume a “Duty” to manage personal data in such a manner that it secures the interest of the data principal. Hence “Ethics” is already embedded in the DGPSI-Full/Lite principles, and its extension to the AI process is therefore automatic.

The “Ethical” requirements can only be identified through a “Risk Assessment Process”, where some risks may appear to stretch the law a little far. Hence, when an auditor prepares a “Gap Assessment” based on the Model Implementation Specifications, and the auditee data fiduciary absorbs certain risks and creates an “Adapted Implementation Specification” for the auditor to probe with evidence through a “Deviation Justification Document”, the difference between what is “Ethical” and what is “Statutorily Mandatory” is flagged.

Similarly, when an assessment is made on an AI, the first Gap Assessment may follow the principle of ethics with “utmost care”. Subsequently, a “Deviation Justification Document for AI Deployment” may be prepared to adapt the Model Implementation Specifications of DGPSI-AI, with modifications guided by the “Risk Absorption” decisions of the management, as sketched below.
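As an illustration only, the sketch below shows one hypothetical way such deviation entries could be recorded so that the “Ethical” versus “Statutorily Mandatory” distinction is explicitly flagged. The names `DeviationJustification`, `AdaptedImplementationSpecification` and their fields are assumptions made for this sketch and are not part of the DGPSI-AI specification itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class RequirementBasis(Enum):
    STATUTORILY_MANDATORY = "statutory"   # e.g. flows directly from DPDPA / ITA 2000
    ETHICAL = "ethical"                   # goes beyond the written law

@dataclass
class DeviationJustification:
    """One hypothetical entry in a Deviation Justification Document for AI deployment."""
    spec_id: str                          # Model Implementation Specification identifier
    basis: RequirementBasis               # flags ethical vs. statutorily mandatory
    deviation: str                        # how the adapted specification differs
    risk_absorbed_by: str                 # management role accepting the residual risk
    evidence: List[str] = field(default_factory=list)  # references to audit evidence

@dataclass
class AdaptedImplementationSpecification:
    """Baseline specifications plus the deviations signed off by the management."""
    baseline_specs: List[str]
    deviations: List[DeviationJustification] = field(default_factory=list)
```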

The “Post Market Monitoring” referred to in the EU AI Act is a principle that Data Fiduciaries can adopt on ethical considerations: they may monitor the impact of the AI on Data Principals even after the purpose of processing is deemed to be complete, which may require a review of data retention. Hence, when an AI is allowed to store personal data after processing, the data fiduciary shall review the need for continued retention at periodic intervals and purge the data when the need no longer exists, as sketched below.
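A minimal sketch of such a periodic retention review follows. The `RetentionRecord` structure, the `still_needed` test, the `purge` action and the 90-day review interval are all assumptions for illustration, to be replaced by the data fiduciary’s own retention policy and deletion mechanism.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention record for personal data that an AI system has
# stored after the primary processing purpose is complete.
class RetentionRecord:
    def __init__(self, record_id, stored_on, justification):
        self.record_id = record_id
        self.stored_on = stored_on          # when the data was retained
        self.justification = justification  # documented reason for retention

REVIEW_INTERVAL = timedelta(days=90)        # assumed periodic review cycle

def review_retention(records, still_needed, purge, now=None):
    """Periodic review in the spirit of post-market monitoring: purge records
    whose retention need no longer exists, keep the rest until the next cycle."""
    now = now or datetime.now(timezone.utc)
    retained, purged_ids = [], []
    for rec in records:
        if still_needed(rec, now):          # data fiduciary's documented retention test
            retained.append(rec)
        else:
            purge(rec)                      # irreversible deletion / anonymisation
            purged_ids.append(rec.record_id)
    next_review = now + REVIEW_INTERVAL     # schedule the next periodic check
    return retained, purged_ids, next_review
```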

We have earlier discussed the concept of “Data Fading” for a developer. This can be another “Ethical Requirement” that can be adopted when the AI deployer continues to train the model with internal data. Alternatively, at the end of each user session a question can be posed:

“Can the learnings of this session be retained for future use, or should they be deleted?”

This would treat the immediate processing as one purpose, and the “storing” and “re-use of the stored data” as separate purposes for which a fresh consent is obtained. The question can be posed periodically or at the end of every process cycle; a minimal sketch follows.
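The sketch below illustrates one possible way of posing this question at session close and treating retention and re-use as a separate, consented purpose. `ask_user`, `consent_log`, `retain_session_data` and `delete_session_data` are hypothetical stand-ins for the deployer’s own interface and data-handling components.

```python
from enum import Enum

class SessionDataDecision(Enum):
    RETAIN_FOR_FUTURE_USE = "retain"   # fresh consent recorded for the new purpose
    DELETE = "delete"                  # session learnings are discarded

def close_session(session_id, ask_user, consent_log,
                  retain_session_data, delete_session_data):
    """At session close, ask whether the session's learnings may be retained.
    Retention and re-use are treated as a separate purpose, so an explicit,
    logged consent is required; otherwise the session data is deleted."""
    answer = ask_user(
        "Can the learnings of this session be retained for future use, "
        "or should they be deleted?"
    )
    if answer == SessionDataDecision.RETAIN_FOR_FUTURE_USE:
        consent_log.record(session_id, purpose="retention-and-reuse")
        retain_session_data(session_id)
    else:
        delete_session_data(session_id)
    return answer
```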

It appears that the six principles of DGPSI-AI could suffice to cover all the AI Governance principles under the OECD principles, the UNESCO principles, the EU AI Act, as well as the Australian “Model Contractual Clauses” principles. We can continue to explore the Model Implementation Specifications under these principles to complete the development of the DGPSI-AI framework.

Naavi

(P.S: Readers may appreciate that the concepts of DGPSI-AI are under development and the need for refinement is recognized. Your comments will help us in the process.)

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. A pioneer of concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He is now focusing on projects such as Secure Digital India and Cyber Insurance.