Explainability… DGPSI-AI Principle No. 3

In the earlier articles we discussed two principles of DGPSI-AI, a child framework of DGPSI for DPDPA compliance in AI systems, namely "Unknown Risk" and "Accountability". We shall now extend our discussion to the third principle, namely "Explainability".

An AI system takes an input and produces an output, but how it arrives at that output is a function of the algorithmic model and the training process. Explainability means providing clear and accessible reasons why a certain decision or output was generated. Lack of such explainability makes the AI a "Black Box".

In the case of a "Black Box AI", the entire accountability for the consequences of AI deployment rests with the licensor, who clearly assumes the role of a Joint Data Fiduciary. DGPSI-AI expects that the "Unknown Risk" principle itself defines the developer/licensor as a Data Fiduciary. If, however, any "exemption" is claimed, or the deployer wants to absorb the risk on behalf of the developer/licensor, the justification can be found only through the explainability feature of the AI.

Explainability also underscores "Transparency" and is supported by "Testing" and "Documentation" at the developer's end, whether these are shared with the deployer or backed by third-party assurance.

The objective of Explainability is to inject "Trust" into the algorithm's functioning.

Some real-world examples of how explainability works are as follows.

Financial Services
In credit scoring and loan approvals, AI explainability helps financial institutions (see the sketch after this list):
- Show customers why their loan application was approved or denied
- Identify which factors (income, credit history, employment status) most influenced the decision
- Ensure compliance with fair lending regulations that require transparent decision-making
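
As an illustration, here is a minimal sketch, assuming Python with scikit-learn, of how a lender could surface the factors behind one credit decision using a linear model's coefficients. The feature names, training data, and threshold are hypothetical, chosen only to mirror the factors listed above; they are not prescribed by DGPSI-AI.

```python
# Minimal, illustrative sketch of decision-level explainability
# for credit scoring. All names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "credit_history_years", "employed"]

# Hypothetical applicants: [annual income, years of credit history,
# employment status (1 = employed, 0 = not)]; label 1 = approved.
X = np.array([
    [55000, 8, 1], [32000, 2, 1], [75000, 12, 1], [18000, 1, 0],
    [41000, 5, 1], [26000, 3, 0], [90000, 15, 1], [22000, 2, 0],
], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(applicant):
    """Rank each feature's contribution to one decision.

    For a linear model, coefficient * standardized value is that
    feature's additive push on the decision score, so the ranking
    is exact rather than approximate.
    """
    z = scaler.transform(np.array([applicant], dtype=float))[0]
    proba = model.predict_proba([z])[0, 1]
    outcome = "approved" if proba >= 0.5 else "denied"
    print(f"Decision: {outcome} (model confidence {proba:.2f})")
    contributions = model.coef_[0] * z
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True):
        direction = "towards approval" if c > 0 else "towards denial"
        print(f"  {name}: {c:+.2f} ({direction})")

explain_decision([30000, 2, 1])  # one hypothetical applicant
```

The output of such a routine, expressed in plain language, is precisely what allows the deployer to tell a customer which factors drove the denial or approval.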
Healthcare
AI diagnostic tools use explainability to (see the sketch after this list):
- Highlight specific regions in medical images that led to a diagnosis
- Rank the importance of different symptoms or test results
- Provide confidence scores for diagnoses to help doctors make informed decisions
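
The image-highlighting idea can be illustrated with occlusion sensitivity, one common explainability technique: mask each region of the image in turn and measure how much the model's score drops. The sketch below, in Python with NumPy, uses a hypothetical stand-in for the model; a real diagnostic network would take its place.

```python
# Minimal occlusion-sensitivity sketch: highlight which image
# regions drove a model's score. Purely illustrative.
import numpy as np

def model_score(image):
    """Hypothetical stand-in for a diagnostic model: scores an
    image by the brightness of its centre region."""
    return image[8:16, 8:16].mean()

def occlusion_map(image, patch=4):
    """Mask each patch in turn; a large drop in score means the
    masked region mattered to the decision."""
    base = model_score(image)
    heat = np.zeros_like(image)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            masked = image.copy()
            masked[r:r+patch, c:c+patch] = 0.0
            heat[r:r+patch, c:c+patch] = base - model_score(masked)
    return heat

image = np.random.rand(24, 24)   # hypothetical 24x24-pixel scan
heat = occlusion_map(image)
row, col = np.unravel_index(heat.argmax(), heat.shape)
print(f"Most influential region is around pixel ({row}, {col})")
```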
Human Resources
AI-powered recruitment systems demonstrate explainability by:
- Showing which qualifications and experience factors influenced candidate scoring
- Ensuring hiring decisions can be justified and are free from bias
- Providing transparency to candidates about how their applications were evaluated
Criminal Justice
AI systems used for risk assessment must explain:
- Which factors contribute to recidivism risk scores
- How different variables are weighted in the decision process
- Why certain interventions are recommended for specific individuals
Content Moderation
Social media platforms use explainable AI to:
- Show users why their content was flagged or removed
- Identify specific phrases or images that triggered moderation actions
- Provide transparency in community guideline enforcement

Considering the wide utility of Explainability and its direct relation to "Transparency" in data protection law, where the deployer has to explain the processing to the data principals, this is considered an important principle under the DGPSI-AI framework.

Naavi
