Risk Management for DPDPA Compliance and Duty of a Data Processor

Abstract: This note discusses why "Privacy by Design" fails and why "Compliance by Design" is a better option to pursue, why the duties of a Data Processor need to be recognized, and what a suggested "Responsible Data Processor Clause" could look like.

Privacy By Design Vs Compliance by Design

Since the days of GDPR, we have been using the term "Privacy by Design", implying that the objective of GDPR is to protect the "Privacy" of an individual.

However, "Privacy" is a concept that is not fully defined. It is primarily the freedom of an individual to keep his "State of Mind" free from external influence. This "State of Mind" is dynamic and is not amenable to legislation unless, at each step of interaction with an individual, a third party checks "Is this fine with your current state of mind?", "How do I recognize when you change your mind?" and so on.

"Design" presupposes a base structure for "Privacy", and if "Privacy Risk" is indeterminable, "Privacy by Design" is not feasible except as a safety template.

Recognizing this fundamental hurdle, the DGPSI framework adopted "Compliance by Design" as its objective. Being "Compliant" with the law ensures that the "Risk" associated with penalties is "Mitigated".

Hence DPDPA compliance is an exercise in "Non-Compliance Risk Mitigation".

The Non-Compliance Risk may arise due to Governance failures, Technology failures or Human failures. Governance can be defined, technology can be designed and humans can be trained. These measures can be considered the basic level of Risk Mitigation.

However, human training has to be raised from the level of mere "Awareness" building to "Self-Motivation to build a culture of Respect for Privacy". Governance should be elevated from introducing lengthy policies to designing practical, implementable procedures for management. Similarly, the "Technology element" has to take into account the unpredictability of technology risks.

In the AI scenario, unpredictable AI risks are today creating a new difficulty that makes even "Compliance by Design" a challenge.

Since the rise of GUI-based software like Windows, computerization has been growing in business circles with the belief that a user need not be a technology expert. Similarly, the use of AI-based software should not require the user to know how the AI functions.

It is for this reason that DGPSI-AI treats deviations from the fundamental principles of AI Governance, such as bias prevention, like bugs to be fixed by the AI developers. It is the duty of the AI developer to ensure that the AI does not generate false outputs, whether because the algorithm itself or its learning data is faulty. The needs for "Explainability", "Transparency" and "Accountability" are derivatives of the need for the AI to be free from error in its decision output.

Hence DGPSI-AI, focusing on "Compliance by Design" by the AI deployer who is a Data Fiduciary, expects all AI risks to be absorbed by the AI vendor/licensor/developer.

Currently, this is sought to be achieved through a "Contractual Commitment".

“Duty” is the essence of DPDPA

DPDPA is unique as a law since, while balancing rights and duties, it lays a strong emphasis on the principle of "Duties". The Data Fiduciary is bound by a "Duty" not only as a Trustee of the Data Principal but also as a custodian of the lawfulness of the processing of personal data.

Section 4 of the Act, which specifies that a person may process personal data only for a lawful purpose, extends to laws outside the DPDPA. Though "lawful purpose" is defined narrowly as any purpose not expressly forbidden by law, the simultaneous operation of ITA 2000 extends the responsibility to "prevention of harm to society". The "due diligence" requirement of ITA 2000, the absence of which can lead to the recognition of an "Offence" or a "Civil Wrong", brings in the concept of a "Duty to Society".

The Data Principal is also expected to follow certain "Duties" under Section 15 of the DPDPA 2023 itself. One of the duties cast upon the Data Principal under this section is to comply with the provisions of all applicable laws while exercising rights under DPDPA 2023.

Thus, both the Data Fiduciary and the Data Principal bear the burden of a "Duty". A "Joint Data Fiduciary", who in conjunction with another entity determines the purpose and means of processing, being a Data Fiduciary, is also bound by the duty within the scope of his operation. The Consent Manager, who is also a Data Fiduciary, is bound by his duties.

In this context it has become necessary for us to ask whether a "Data Processor" is also bound by some "Duty" of his own. There are two types of Data Processors whom we come across in the DPDPA scenario: the "Back-End Data Processor" and the "Front-End Data Processor".

The "Front-End Data Processor" interacts with the Data Principals on behalf of the Data Fiduciary. Most of them would be "Joint Data Fiduciaries", though it is technically feasible for them to remain pure "Data Processors". The "Back-End Data Processors" are "undisclosed agents" (unless the Privacy Notice specifically discloses their presence) who influence the outcome of processing but hide behind the Data Fiduciary. The "Duties" of these types of Data Processors need to be clarified for compliance purposes.

DPDPA 2023 does not seem to directly impose a "Duty" on the Data Processor. However, the definition of "Data Processor" is analogous to that of an "Agent" of a Data Fiduciary.

It is time we analyze the Indian Contract Act and study the law of agency in greater detail to determine the liabilities of an agent (Data Processor) towards meeting the duties cast on the Principal (Data Fiduciary).

The agent owes one set of duties to the principal, such as following the instructions of the principal as per the contract and conducting his activities with reasonable skill and diligence.

At the same time, the principal has duties to the agent, such as indemnifying the agent for lawful acts done in good faith in the exercise of his authority.

When we look at the liability of an agent towards a third party, the concept of a "Disclosed" or "Undisclosed" agent comes into play (applicable to the Front-End Data Processor). In the case of an agent who has disclosed his representative role, the Principal (Data Fiduciary) is liable to the third party (Data Principal). If the agent acts as an "Undisclosed Agent", the third party may have recourse to the agent as well.

Such an agent can be the Data Processor or even the AI used by the Data Fiduciary or the Data Processor. This concept can be applied to the Data Processor when the processing contract involves an interaction with the third party.

When an AI makes a decision and communicates it to the Data Principal, the recourse of the Data Principal is against the Data Fiduciary as the disclosed party and also against the "person who caused the AI system to behave in a particular manner" (refer Section 11 of ITA 2000). Such a person is primarily the Data Fiduciary himself, or the Data Processor if the AI is used by the Data Processor.

But if the AI usage is bound by a software contract and the developer/licensor of the AI has retained control over the code and functioning of the software, we may draw him into the liability chain.

Managing Risk Through a Model Contractual Clause

Hence the Data Fiduciary-Data Processor contract assumes importance to determine the liability of the Data Processor.

The contract could make a statement such as:

"The Data Processor shall be bound by the duties cast on the Data Fiduciary as per DPDPA read with the Information Technology Act 2000, which includes processing personal data in a lawful manner, in compliance with all applicable laws, and with due diligence and reasonable security practices."

This clause can be called the "Responsible Data Processor Clause" and is recommended for incorporation in all Data Processor contracts.

Since the Data Processor has the power to negotiate a contract in which such responsibility may be refused, it is suggested that MeitY, through its recommendations, reiterate the link between the Indian Contract Act and the DPDPA and ensure that Data Processors are not allowed to walk away without responsibility. Until such time, the above Responsible Data Processor Clause may be used in contracts.

(Comments welcome)

Naavi

Posted in Privacy | Leave a comment

Let DGPSI-AI guide the RBI and REs for FREE AI implementation

…Continued from the Previous post

The Bhattacharyya Committee report on FREE AI has a confusing name. First of all, it is not about "free" AI software, as the name may first imply. FREE AI stands for the Framework for Responsible and Ethical Enablement of Artificial Intelligence, recommended by the Committee to RBI for its consideration in the financial sector. RBI may consider it and decide how it can be adopted and actioned.

Basically, this is a framework suggested for the RBI, which inter alia speaks about what the Government or the industry may do.

This is not to be considered a "Framework of Compliance for AI Deployers or Developers".

The REs, the Regulated Entities under RBI consisting of banks, NBFCs etc., can watch for RBI coming up with binding guidelines following this report.

The Committee was constituted on December 6, 2024 with the following terms of reference.

i. To assess the current level of adoption of AI in financial services globally and in India.

ii. To review regulatory and supervisory approaches on AI with a focus on the financial sector globally.

iii. To identify potential risks associated with AI, if any, and recommend an evaluation, mitigation and monitoring framework and consequent compliance requirements for financial institutions, including banks, NBFCs, FinTechs, PSOs, etc.

iv. To recommend a framework including governance aspects for responsible, ethical adoption of AI models/ applications in the Indian financial sector.

v. Any other matter related to AI in the Indian financial sector

The Committee has come up with its recommendations based on Seven Sutras and 26 recommendations.

We have tried to summarize the contents of the report in earlier articles, ending with "Observations on the FREE AI Committee Report".

This framework is not like the DGPSI-AI framework, which is a framework for DPDPA compliance by Data Fiduciaries. It is also not a law on AI. It contains some provisions which have been listed in our previous article suggesting 13 action points for the REs.

If any of the REs want to start acting on the implementation of the report at this stage, without waiting for RBI to issue its acceptance circular, they can start with the development of an AI policy at the Board level.

One of the action points suggests:

AI System Governance Framework: 

REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning.

Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage.

REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real-world consequences, this should include human oversight, especially for medium and high-risk use cases and applications.

In the above recommendation, the first two paragraphs relate to the AI developers and only the third para refers to the AI deployers. DGPSI-AI is the relevant framework for the REs for meeting this recommendation.

The framework makes reference to an "institutional product approval framework" which needs to be developed. REs also need to ensure a Board-approved consumer protection framework that prioritizes transparency, fairness and accessible recourse mechanisms for customers. DGPSI-AI addresses these requirements.

REs must also identify potential risks arising out of their use of AI, which is also covered by DGPSI-AI.

The recommendations include the creation of an AI inventory and an AI audit framework with internal audits, third-party audits and periodic review. DGPSI-AI addresses these requirements.

The requirements also recommend that REs should include AI related disclosures in their annual reports and websites.

The DGPSI-AI audit would be the right tool to meet some of these requirements.

Though the DGPSI-AI framework for DPDPA compliance was developed independently, without knowledge of what was brewing in the committee, it is a coincidence that it was published right before the FREE AI report.

I hope RBI will study the DGPSI-AI framework and use it as part of its guidelines to REs.

Naavi


Quantum activities in India

It was a pleasant surprise today to find that a few private sector companies in India have already made a breakthrough in Quantum computing. I heard from Mr Nagendra Nagaraja of Bengaluru about his company qpiai.tech.

It was also good to note that the company is focused on being a "Product Company" and also supports SMEs/MSMEs.

One of the objectives of the Indian Quantum Mission is to develop intermediate-scale quantum computers with 50-100 physical qubits in 5 years and 50-1000 physical qubits in 8 years. It was nice to hear that Mr Nagaraja and his team have already developed a 25-qubit system and are planning to reach the 1000-qubit target by 2029-2030.

The other objectives of the Quantum Mission are:

  • Establish satellite-based secure quantum communications over 2000 kilometers within India

  • Create inter-city quantum key distribution networks spanning 2000 km

  • Develop quantum sensors including magnetometers and atomic clocks for precision applications

The Government has also announced four thematic hubs for quantum research with IISC, Bengaluru being one of them along with IITs in Delhi, Mumbai and Chennai. DRDO is also collaborating with TCS and TIFR for the development of indigenous quantum processors. HCL and Tech Mahindra are also working on developing quantum software and algorithms.

Apart from the Quantum Research Park and nearly 15 start-ups in Bengaluru, a large Quantum tech park is envisaged in Amaravati, Andhra Pradesh.

The integration of Quantum and AI technologies may open opportunities in Quantum Machine Learning for enhanced pattern recognition, accelerated ML model training and advanced optimization algorithms.

Hopefully, India will make huge strides in the field and catch up with countries like the US and China in the near future.

We wish all the innovative entrepreneurs who are working in the Quantum plus AI field a grand success.

Naavi

PS: While trying to browse qpiai.tech, do not be confused by similar-looking domain names such as qpai.tech. We wish both these domains would put up a "Lookalikes" disclosure.


Happy Independence Day 2025 to all


Disclosure and Assurance document under DGPSI-AI

That DGPSI-AI is a pioneering and forward-thinking framework which establishes India as a leader in AI deployment is the assessment of leading LLMs such as ChatGPT, Gemini, Perplexity and DeepSeek.

It may take time for the Indian AI ecosystem to understand why these LLMs hold DGPSI-AI in such high esteem. As we proceed to explain DGPSI-AI in greater detail, the reasons will present themselves.

One of the implementation specifications envisaged by the DGPSI-AI framework is the classification of software as "AI" or "Non-AI" through documentation.

DGPSI starts with a classification of data as "Non-Personal" and "Personal", and "Personal Data" itself is further classified as "Covered under DPDPA" and "Covered under other country laws". Similarly, before DGPSI-AI implementation starts, it is necessary to classify software as "AI" or "Non-AI". This also means that there has to be a "Process Inventory" and a "Software Inventory", which are prerequisites for the identification of an "AI-Process".
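As a minimal sketch of how such inventories could be recorded, the following illustrates the classification described above. The field and class names (`SoftwareEntry`, `ProcessEntry`, the data-class labels) are my own illustrative assumptions, not terms defined by DGPSI or DGPSI-AI.

```python
from dataclasses import dataclass, field

# Hypothetical inventory records; all names are illustrative assumptions.

@dataclass
class SoftwareEntry:
    name: str
    is_ai: bool                 # the "AI" vs "Non-AI" classification
    vendor_disclosure: bool     # licensor has given a "Disclosure and Assurance"?

@dataclass
class ProcessEntry:
    name: str
    software: list = field(default_factory=list)
    # "non-personal" | "personal-dpdpa" | "personal-other-law"
    data_class: str = "non-personal"

    def is_ai_process(self) -> bool:
        # A process becomes an "AI-Process" if any software it uses is AI.
        return any(s.is_ai for s in self.software)

# Example: a process using an "AI inside" product on DPDPA-covered data
scoring = ProcessEntry(
    name="claims-scoring",
    software=[SoftwareEntry("RiskScorer", is_ai=True, vendor_disclosure=False)],
    data_class="personal-dpdpa",
)

# Flag AI processes on DPDPA-covered personal data that still lack the
# vendor's Disclosure and Assurance.
needs_assurance = (
    scoring.is_ai_process()
    and scoring.data_class == "personal-dpdpa"
    and not all(s.vendor_disclosure for s in scoring.software if s.is_ai)
)
print(needs_assurance)  # True for this example
```

The point of the sketch is only that the two inventories, once maintained, make the "AI-Process" identification a mechanical check rather than a judgment call repeated for each processing activity.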

In this process, it is intended that a Data Fiduciary who purchases software branded as "AI Embedded" or "AI Inside" shall insist that the licensor incorporates a "Disclosure and Assurance" to the following effect:

"The original code of this software, developed by ……………………, is capable/not capable of modifying its code without human intervention based on the outputs generated, and has been tested and assured as safe for personal data processing for DPDPA compliance."

This declaration establishes the original accountability for the AI software (which is a requirement under ITA 2000 compliance) and fulfils the first requirement of identifying the software as AI.

This may be one of the mandatory contract clauses recommended for use in every software supply contract.

I request the readers to add their comment on the feasibility and desirability of such a clause and whether it can be voluntarily adopted or requires a mandate from the Government. I look forward to your views.

Naavi


Observations on the FREE AI Committee Report

Continued from earlier posts:

The FREE AI Committee headed by Dr Pushpak Bhattacharyya has submitted a report to RBI consisting of 26 recommendations.

For these 26 recommendations, action and timeline responsibilities have also been assigned. Twelve of the actions (1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 13 and 23) are indicated as responsibilities of the Regulators and the Government. Industry and SROs are indicated as responsible for some of the actions (4, 12, 13* and 14).

Thirteen action points (10, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 and 25) are attributed to the REs and are listed below. These REs are the Data Fiduciaries to whom DGPSI-AI is applicable.

These requirements can be summarised below.

 

10. Capacity Building within REs: REs should develop AI-related capacity and governance competencies for the Board and C-suite, as well as structured and continuous training, upskilling and reskilling programs across the broader workforce who use AI, to effectively mitigate AI risks and ensure ethical as well as responsible AI adoption.

14. Board Approved AI Policy: To ensure the safe and responsible adoption of AI within institutions, REs should establish a board-approved AI policy which covers key areas such as governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection measures, AI disclosures, model life cycle framework, and liability framework. Industry bodies should support smaller entities with an indicative policy template.

15. Data Lifecycle Governance: REs must establish robust data governance frameworks, including internal controls and policies for data collection, access, usage, retention, and deletion for AI systems. These frameworks should ensure compliance with applicable legislation, such as the DPDP Act, throughout the data life cycle.

16. AI System Governance Framework: REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning. Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage. REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real-world consequences, this should include human oversight, especially for medium and high-risk use cases and applications.

17. Product Approval Process: REs should ensure that all AI-enabled products and solutions are brought within the scope of the institutional product approval framework, and that AI-specific risk evaluations are included in the product approval frameworks.

18. Consumer Protection: REs should establish a board-approved consumer protection framework that prioritises transparency, fairness, and accessible recourse mechanisms for customers. REs must invest in ongoing education campaigns to raise consumer awareness regarding safe AI usage and their rights.

19. Cybersecurity Measures: REs must identify potential security risks on account of their use of AI and strengthen their cybersecurity ecosystems (hardware, software, processes) to address them. REs may also make use of AI tools to strengthen cybersecurity, including dynamic threat detection and response mechanisms.

20. Red Teaming: REs should establish structured red teaming processes that span the entire AI lifecycle. The frequency and intensity of red teaming should be proportionate to the assessed risk level and potential impact of the AI application, with higher-risk models being subject to more frequent and comprehensive red teaming. Trigger-based red teaming should also be considered to address evolving threats and changes.

21. Business Continuity Plan for AI Systems: REs must augment their existing BCP frameworks to include both traditional system failures as well as AI model-specific performance degradation. REs should establish fallback mechanisms and periodically test the fallback workflows and AI model resilience through BCP drills.

22. AI Incident Reporting and Sectoral Risk Intelligence Framework: Financial sector regulators should establish a dedicated AI incident reporting framework for REs and FinTechs and encourage timely detection and reporting of AI-related incidents. The framework should adopt a tolerant, good-faith approach to encourage timely disclosure.

23. AI Inventory within REs and Sector-Wide Repository: REs should maintain a comprehensive internal AI inventory that includes all models, use cases, target groups, dependencies, risks and grievances, updated at least half-yearly, and it must be made available for supervisory inspections and audits. In parallel, regulators should establish a sector-wide AI repository that tracks AI adoption trends, concentration risks, and systemic vulnerabilities across the financial system with due anonymization of entity details.

24. AI Audit Framework: REs should implement a comprehensive, risk-based, calibrated AI audit framework, aligned with a board-approved AI risk categorisation, to ensure responsible adoption across the AI lifecycle, covering data inputs, model and algorithm, and the decision outputs.

a. Internal Audits: As the first level, REs should conduct internal audits proportionate to the risk level of the AI application.

b. Third-Party Audits: For high-risk or complex AI use cases, independent third-party audits should be undertaken.

c. Periodic Review: The overall audit framework should be reviewed and updated at least biennially to incorporate emerging risks, technologies, and regulatory developments. Supervisors should also develop AI-specific audit frameworks, with clear guidance on what to audit, how to assess it, and how to demonstrate compliance.

25. Disclosures by REs: REs should include AI-related disclosures in their annual reports and websites. Regulators should specify an AI-specific disclosure framework to ensure consistency and adequacy of information across institutions.
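As an illustration of the internal AI inventory asked for in recommendation 23, the following is a minimal sketch of one inventory record with a half-yearly review check. The class and field names are my own assumptions for illustration, not terms prescribed by the report.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical shape of one internal AI inventory entry; the report asks
# for models, use cases, target groups, dependencies, risks and grievances,
# updated at least half-yearly.

@dataclass
class AIInventoryEntry:
    model_name: str
    use_case: str
    target_group: str
    dependencies: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    grievances: list = field(default_factory=list)
    last_reviewed: date = date(2025, 1, 1)

    def review_overdue(self, today: date) -> bool:
        # "At least half-yearly" approximated here as 183 days.
        return (today - self.last_reviewed).days > 183

entry = AIInventoryEntry(
    model_name="loan-approval-scorer",
    use_case="retail credit underwriting",
    target_group="retail borrowers",
    dependencies=["core-banking-db"],
    risks=["model drift", "bias"],
    last_reviewed=date(2025, 1, 1),
)
print(entry.review_overdue(date(2025, 8, 15)))  # True: more than half a year old
```

A record kept in this shape can be exported directly for the supervisory inspections and audits the recommendation contemplates.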

Readers may kindly map DGPSI-AI with this list. At first glance DGPSI-AI seems to cover all these aspects.

Continued….

Naavi
