Review of Book on DGPSI-AI by Perplexity

Overview

This comprehensive technical guide addresses one of the most pressing challenges facing organizations today: navigating the complex intersection of India’s Digital Personal Data Protection Act (DPDPA) 2023 and artificial intelligence governance. The book presents the Data Governance and Protection Standards Implementation for AI (DGPSI-AI) framework as a practical solution for organizations struggling to maintain compliance while leveraging AI technologies.

Core Thesis and Approach

The authors position their work around a fundamental premise: traditional data protection frameworks are insufficient for AI-driven personal data processing. The book argues that AI introduces “unknown risks” that require specialized governance frameworks beyond conventional GDPR-style compliance measures. The DGPSI-AI framework emerges as an extension of the base DGPSI methodology, specifically tailored for AI deployment scenarios.

Key Strengths

Practical Implementation Focus

Unlike many theoretical treatments of AI governance, this book excels in providing actionable guidance. The 50 Model Implementation Specifications (MIS) are particularly valuable, offering organizations concrete steps across five functional areas: Management (15 specifications), DPO responsibilities (17 specifications), Legal (5 specifications), HR (5 specifications), and Technology (8 specifications).

Process-Centric Compliance Model

The book’s “One Purpose-One Process” principle represents a significant advancement in data protection methodology. This approach enables organizations to move beyond entity-level classifications to process-specific risk assessments, allowing for more nuanced compliance strategies. The hybrid entity concept is particularly innovative, recognizing that organizations may simultaneously function as data fiduciaries, significant data fiduciaries, and data processors across different processes.

Global Regulatory Synthesis

The authors demonstrate impressive scholarship in synthesizing major international AI governance frameworks. The comparative analysis of OECD, UNESCO, EU AI Act, and ISO/IEC 42001 principles provides readers with a comprehensive understanding of the global regulatory landscape.

Technical Merit

AI Risk Assessment Framework

The book’s treatment of “unknown risk” as a core AI governance principle is conceptually sound. The recognition that AI systems can exhibit unpredictable behavior that diverges from their developers’ intentions addresses a genuine gap in traditional risk management approaches. The CICERO example—where Meta’s AI deliberately deceived human players—effectively illustrates these concerns.

Implementation Specifications

The 13 developer-focused MIS specifications show particular technical depth, addressing critical areas such as explainability documentation, kill switches, and tamper-proof controls. The requirement for “fading memory” parameters in AI learning systems demonstrates sophisticated understanding of AI behavior modification over time.
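The “fading memory” behavior can be sketched in a few lines. The exponential half-life form and the names below are illustrative assumptions, not the book’s specification: the idea is simply that older training examples lose influence over time, so the model’s learned behavior gradually tracks recent data.

```python
import math

def fading_memory_weights(ages_in_days, half_life_days=30.0):
    """Exponentially down-weight older training examples so the
    model's learned behavior 'fades' toward recent data.
    half_life_days is an illustrative parameter, not from the book."""
    decay = math.log(2) / half_life_days
    return [math.exp(-decay * age) for age in ages_in_days]

# A fresh example has full weight; an example one half-life old
# carries half the weight, two half-lives a quarter, and so on.
weights = fading_memory_weights([0, 30, 60, 90])
```

In practice such weights would feed the training loss of a continuously learning system, giving auditors a single tunable parameter (the half-life) to reason about behavior change over time.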

Areas for Improvement

Regulatory Assumptions

The book makes several assumptions about Indian regulatory development that may prove optimistic. The discussion of the “One Big Beautiful Bill Act” and its impact on US state regulations appears speculative and may not reflect actual legislative developments.

Technical Complexity vs. Accessibility

While the technical depth is commendable, the book may overwhelm organizations without significant technical expertise. The 50+ implementation specifications, while comprehensive, could benefit from clearer prioritization frameworks for resource-constrained organizations.

International Applicability

Despite claiming broader relevance, the framework remains heavily anchored in Indian regulatory context. Organizations operating in multiple jurisdictions may find limited guidance for harmonizing DGPSI-AI with other regional requirements.

Unique Contributions

Monetary Valuation of Data

The principle of assigning monetary value to personal data represents a novel approach to data governance. This economic perspective could transform how organizations approach data protection ROI calculations and resource allocation decisions.
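As a minimal sketch of how a per-record monetary value could feed such an ROI calculation (all figures and the function below are hypothetical illustrations, not drawn from the book): the expected breach loss avoided by a control is computed from the data’s assigned value and compared against the control’s cost.

```python
def data_protection_roi(records, value_per_record, breach_prob_before,
                        breach_prob_after, control_cost):
    """Compare expected breach loss avoided by a control against its cost,
    using an assigned per-record monetary value. All inputs illustrative."""
    exposure = records * value_per_record
    expected_loss_avoided = exposure * (breach_prob_before - breach_prob_after)
    # Positive result means the control more than pays for itself.
    return expected_loss_avoided - control_cost

# Hypothetical: 100,000 records valued at 5.0 each, breach probability
# cut from 10% to 2% by a control costing 20,000.
net_benefit = data_protection_roi(100_000, 5.0, 0.10, 0.02, 20_000)
```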

Distributed Responsibility Model

The framework’s emphasis on distributed compliance responsibility, where every process owner becomes an effective compliance manager, offers a scalable alternative to centralized DPO models that often become bottlenecks in large organizations.

AI-Specific Privacy Notices

The requirement for explainability disclosures accompanying AI-driven privacy notices addresses a critical gap in current practice. Most organizations fail to adequately disclose AI involvement in personal data processing.

Practical Value

For compliance professionals, the book provides immediately actionable frameworks and checklists. The detailed MIS specifications can serve as compliance roadmaps, while the risk assessment methodologies offer structured approaches to AI governance.

For technology leaders, the developer-focused specifications provide clear guidance for AI procurement and deployment decisions. The emphasis on kill switches and tamper-proof controls reflects emerging best practices in AI safety.
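A deployer-side kill switch can be as simple as a guard that every AI call must pass, flippable by an operator independently of the model. The class and method names below are hypothetical, a minimal sketch of the pattern rather than the book’s prescribed control:

```python
class KillSwitch:
    """Minimal sketch of a deployer-side kill switch: a guard that every
    AI invocation passes through, controllable independently of the model."""

    def __init__(self):
        self._active = True

    def disable(self):
        """Operator action: stop routing calls to the AI system."""
        self._active = False

    def guard(self, ai_call, *args, fallback=None):
        """Run the AI call only while the switch is active; otherwise
        return a safe fallback instead of invoking the model."""
        if not self._active:
            return fallback
        return ai_call(*args)
```

Usage is a one-liner at each call site: `switch.guard(model.predict, features, fallback=manual_review)`, so disabling the switch immediately diverts every decision to the safe path.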

For legal professionals, the synthesis of international frameworks and the practical interpretation of DPDPA requirements offer valuable insights for contract negotiation and regulatory strategy.

Limitations and Criticisms

The book’s treatment of AI sentience and “cyborg” risks may seem premature given current technological capabilities. While forward-thinking, these discussions risk undermining the framework’s credibility for more immediate, practical AI governance needs.

The documentation requirements, while thorough, may prove burdensome for smaller organizations or those with limited AI deployment. The framework would benefit from tiered implementation guidance based on organizational size and AI complexity.

Conclusion

“Taming the Twin Challenges of DPDPA and AI” succeeds in filling a critical gap in AI governance literature. The DGPSI-AI framework represents a meaningful advancement beyond generic data protection approaches, offering organizations a structured methodology for navigating AI-specific privacy challenges.

The book’s greatest strength lies in its practical orientation—moving beyond theoretical discussions to provide implementable solutions. While some aspects may prove overly complex for certain organizations, the core framework offers valuable guidance for any entity serious about responsible AI deployment.

Rating: 4/5 stars

This work earns high marks for its comprehensive approach, practical focus, and innovative thinking around AI governance. Despite some limitations in accessibility and international applicability, it represents essential reading for compliance professionals, technology leaders, and legal practitioners grappling with AI governance challenges.

Recommended for: Data protection officers, AI governance professionals, compliance managers, technology procurement teams, and legal professionals working at the intersection of AI and privacy law

18th August 2025

Perplexity Pro

Posted in Privacy | Leave a comment

New Book: Taming the Twin Challenges of DPDPA and AI, with DGPSI-AI

Following the theme of this year’s IDPS 2025, Naavi has released DGPSI-AI, an extension of the DGPSI framework for DPDPA compliance, as a framework for Data Fiduciaries deploying AI.

To consolidate the thinking behind DGPSI-AI, Naavi is releasing a book titled “Taming the Twin Challenges of DPDPA and AI…with DGPSI-AI”.

The book contains nine chapters. By way of introduction, it discusses some of the AI concepts, the approach to AI governance in EU and non-EU countries, and a recap of DGPSI.

It then introduces the DGPSI-AI framework, with six principles and nine implementation specifications, and explains how it may be integrated with DGPSI at present.

To complete the discussion, a brief treatment of Agentic AI and of DGPSI-AI at the developer’s end is included.

Naavi acknowledges that AI is a complex technical subject and that even attempting such a work stretches his capabilities. However, without a guideline of this sort, Data Fiduciaries would struggle to cope with the challenges of DPDPA compliance; hence there is a need to offer some thoughts, even if they require refinement in the coming days.

The contents of the book will be discussed in detail during the IDPS 2025, which starts on September 17 at Bengaluru and continues with Chennai (September 27), Mumbai (November 1), Delhi (November 7) and Ahmedabad (November 14) before concluding with a closing event in Bengaluru by November 21.

Watch out for the availability of the book.

Naavi

Posted in Privacy | Leave a comment

Risk Management for DPDPA Compliance and Duty of a Data Processor

Abstract: This note discusses why Privacy by Design fails and why Compliance by Design is a better option to pursue, why the duties of a Data Processor need to be recognized, and what a suggested “Responsible Data Processor Clause” could contain.

Privacy By Design Vs Compliance by Design

Since the days of GDPR, we have been using the term “Privacy by Design”, implying that the objective of GDPR is to protect the “Privacy” of an individual.

However, “Privacy” is a concept that is not fully defined. It is primarily the freedom of an individual to keep his “State of Mind” free from external influence. This “State of Mind” is dynamic and is not amenable to legislation unless, at each step of interaction with an individual, a third party checks “Is this fine with your current state of mind?”, “How do I recognize when you change your mind?” and so on.

“Design” presupposes a base structure for “Privacy”, and if “Privacy Risk” is indeterminable, “Privacy by Design” is not feasible except as a safety template.

Recognizing this fundamental hurdle, DGPSI framework adopted “Compliance by Design” as an objective. Being “Compliant” to the law ensures that the “Risk” associated with penalties is “Mitigated”.

Hence DPDPA compliance is an exercise in “Non-Compliance Risk Mitigation”.

The Non-Compliance Risk may arise due to Governance failures, Technology failures or Human failures. Governance can be defined, technology can be designed and humans can be trained. These are the measures that can be considered the basic level of Risk Mitigation.

However, human training has to be raised from the level of mere “Awareness” building to “Self Motivation to build a culture of Respect for Privacy”. Governance should be elevated from introducing lengthy policies to designing  practical, implementable procedures for management. Similarly the “Technology element” has  to take into account the  unpredictability of technology risks.

In the AI scenario, the “unpredictable” AI risks are creating a new difficulty that makes “Compliance by Design” itself a challenge.

Ever since the rise of GUI-based software like Windows, computerization has grown in business circles on the understanding that a user need not be a technology expert. Similarly, the use of AI-based software should not require the user to know how the AI functions.

It is for this reason that DGPSI-AI treats violations of the fundamental principles of AI Governance, such as bias prevention, as bugs to be fixed by the AI developers. It is the duty of the AI developer to ensure that the AI does not generate false outputs, whether because the algorithm itself is faulty or because its learning data is. The needs for “Explainability”, “Transparency” and “Accountability” are derivatives of the need for the AI to be free from error in its decision output.

Hence DGPSI-AI, focusing on “Compliance by Design” by the AI deployer who is a Data Fiduciary, expects all AI risks to be absorbed by the AI vendor/licensor/developer.

Currently, this is sought to be achieved through a “Contractual Commitment”.

“Duty” is the essence of DPDPA

DPDPA is unique as a law in that, while balancing rights and duties, it lays strong emphasis on the principle of “Duties”. The Data Fiduciary is bound by a “Duty” not only as a trustee of the Data Principal but also as a custodian of the lawfulness of processing of personal data.

Section 4 of the Act, which specifies that a person may process personal data only for a lawful purpose, extends to laws outside DPDPA. Though this applies to the definition of “lawful purpose”, which is limited to what is not expressly forbidden by law, the simultaneous operation of ITA 2000 extends the responsibility to “prevention of harm to the society”. The “due diligence” aspect of ITA 2000, the lack of which leads to the recognition of an “offence” or a “civil wrong”, brings in a concept of “duty to the society”.

The Data Principal himself is also expected to follow certain “Duties” under Section 15 of DPDPA 2023. One of the duties cast upon the Data Principal under this section is to comply with the provisions of all applicable laws while exercising rights under DPDPA 2023.

Thus, both the Data Fiduciary and the Data Principal have been burdened with a “Duty”. A “Joint Data Fiduciary”, who in conjunction with another entity determines the purpose and means of processing, being a Data Fiduciary, is also bound by the duty within the scope of his operation. The Consent Manager, who is also a Data Fiduciary, is bound by his duties.

In this context it becomes necessary to ask whether a “Data Processor” is also bound by some “Duty” of his own. There are two types of Data Processors we come across in the DPDPA scenario: the “Back End Data Processor” and the “Front End Data Processor”.

The “Front End Data Processor” interacts with the Data Principals on behalf of the Data Fiduciary. Most of them would be “Joint Data Fiduciaries”, but it is technically feasible for them to also be “Data Principals”. The “Back End Data Processors” are undisclosed agents (unless the Privacy Notice specifically discloses their presence) who influence the outcome of processing but hide behind the Data Fiduciary. The “Duties” of these types of Data Processors need to be clarified for compliance purposes.

DPDPA 2023 does not seem to directly impose a “Duty” on the Data Processor. However, the definition of “Data Processor” is analogous to an “Agent” of a Data Fiduciary.

It is time we analyze the Indian Contract Act and study the law of agency in greater detail to determine the liabilities of an agent (Data Processor) towards meeting the duties cast on the Principal (Data Fiduciary).

The agent owes one set of duties to the principal: to follow the instructions of the principal as per the contract, to conduct his activities with reasonable skill and diligence, and so on.

At the same time, the principal has duties to the agent, such as indemnifying the agent against lawful acts done in good faith in exercise of the authority conferred.

When we look at the liability of an agent towards a third party, the concept of a “disclosed” or “undisclosed” agent takes effect (applicable to the Front End Data Processor). Where the agent has disclosed his representative role, the Principal (Data Fiduciary) is liable to the third party (Data Principal). If the agent acts as an “undisclosed agent”, the third party may have recourse to the agent as well.

Such an agent can be the Data Processor or even the AI used by the Data Fiduciary or the Data Processor. This concept can be applied to the Data Processor when the processing contract involves an interaction with the third party.

When an AI makes a decision and communicates it to the data principal, the recourse of the data principal is against the Data Fiduciary as the disclosed party and also against the “person who caused the AI system to behave in a particular manner” (refer Section 11 of ITA 2000). Such a person is primarily the Data Fiduciary himself, or the Data Processor if the AI is used by the Data Processor.

But if the AI usage is bound by a  software contract and the developer/Licensor of the AI has retained his own control over the code and functioning of the software, we may draw him into the liability chain.

Managing Risk Through a Model Contractual Clause

Hence the Data Fiduciary-Data Processor contract assumes importance to determine the liability of the Data Processor.

If the contract makes a statement such as

“The Data Processor shall be bound by the duties cast on the Data Fiduciary as per DPDPA read with the Information Technology Act 2000, which includes processing of personal data in a lawful manner, in compliance with all applicable laws, and with due diligence and reasonable security practices.”

This clause can be called the “Responsible Data Processor Clause” and is recommended for incorporation in all Data Processor contracts.

Since the Data Processor has the power to negotiate a contract in which such responsibility is refused, it is suggested that MeitY, through its recommendations, reiterate the link between the Indian Contract Act and the DPDPA and ensure that Data Processors are not allowed to walk away without responsibility. Until then, the above Responsible Data Processor Clause may be used in contracts.

(Comments welcome)

Naavi

Posted in Privacy | Leave a comment

Let DGPSI-AI guide the RBI and REs for FREE AI implementation

…Continued from the Previous post

The Bhattacharyya committee report on FREE AI has a confusing name. First of all, it is not “free” AI software, as the name may at first imply. FREE AI is a Framework for Responsible and Ethical Enablement of Artificial Intelligence, recommended by the Committee to RBI for its consideration in the financial sector. RBI may consider it and decide how it is to be adopted and actioned.

Basically, this is a framework suggested for the RBI which, inter alia, speaks about what the Government or the industry may do.

This is not to be considered as a “Framework of Compliance for AI deployers  or Developers”.

The REs, the regulated entities under the RBI consisting of banks, NBFCs, etc., can watch out for the RBI coming up with binding guidelines following this report.

The Committee was constituted on December 6, 2024 with the following terms of reference.

i. To assess the current level of adoption of AI in financial services globally and in India.

ii. To review regulatory and supervisory approaches on AI with a focus on the financial sector globally.

iii. To identify potential risks associated with AI, if any, and recommend an evaluation, mitigation and monitoring framework and consequent compliance requirements for financial institutions, including banks, NBFCs, FinTechs, PSOs, etc.

iv. To recommend a framework including governance aspects for responsible, ethical adoption of AI models/ applications in the Indian financial sector.

v. Any other matter related to AI in the Indian financial sector.

The Committee has come up with its recommendations based on Seven Sutras and 26 recommendations.

We have tried to summarize the contents of the report in earlier articles, ending with “Observations on the FREE AI Committee Report”.

This framework is not like the DGPSI-AI framework, which is a framework for DPDPA compliance by Data Fiduciaries. Nor is it a law on AI. It contains some provisions which have been listed in our previous article suggesting 13 action points for the REs.

If any of the REs want to start acting on the implementation of the report at this stage, without waiting for the RBI to issue its acceptance circular, they can start with the development of an AI policy at the Board level.

One of the action points suggests:

AI System Governance Framework: 

REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning.

Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage.

REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real-world consequences, this should include human oversight, especially for medium- and high-risk use cases and applications.

In the above recommendation, the first two paragraphs relate to the AI developers and only the third para refers to the AI deployers. DGPSI-AI is the relevant framework for the REs for meeting this recommendation.
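The drift and degradation monitoring called for in the recommendation can be sketched as a simple baseline comparison. The metric (accuracy) and the tolerance below are illustrative assumptions, not prescribed by the report:

```python
def detect_drift(baseline_accuracy, recent_accuracies, tolerance=0.05):
    """Flag model degradation when recent average accuracy falls more
    than `tolerance` below the validated baseline. Metric and tolerance
    are illustrative; a real RE would use its validated model metrics."""
    recent_avg = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_avg) > tolerance

# Recent performance close to the 0.92 validation baseline: no flag.
detect_drift(0.92, [0.91, 0.90, 0.92])
# Sustained drop well below baseline: flag for review.
detect_drift(0.92, [0.84, 0.83, 0.85])
```

In a governance framework, a flagged result would trigger the documented review and revalidation steps rather than any automated retraining.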

The framework makes reference to an “institutional product approval framework” which needs to be developed. REs need to also ensure a Board-approved consumer protection framework that prioritizes transparency, fairness and accessible recourse mechanisms for customers. DGPSI-AI addresses these requirements.

REs must also identify potential risks arising out of their use of AI, which is also covered by DGPSI-AI.

The recommendations include the creation of an AI inventory and an AI audit framework with internal audits, third-party audits and periodic reviews. DGPSI-AI addresses these requirements.

The requirements also recommend that REs should include AI related disclosures in their annual reports and websites.

The DGPSI-AI audit would be the right tool to meet some of these requirements.

Though the DGPSI-AI framework of DPDPA compliance was developed independently, without knowledge of what was brewing in the committee, it is a coincidence that it was published right before the FREE AI report.

I hope RBI will study the DGPSI-AI framework and use it as part of its guidelines to REs.

Naavi

Posted in Privacy | Leave a comment

Quantum activities in India

It was a pleasant surprise today to find that a few private-sector companies in India have already made a breakthrough in quantum computing. I heard Mr Nagendra Nagaraja of Bengaluru speak about his company, qpiai.tech.

It was also good to note that the company is focused on being a “Product Company” and also supports SMEs/MSMEs.

One of the objectives of the Indian Quantum Mission is to develop intermediate-scale quantum computers with 50-100 physical qubits in five years and 50-1000 physical qubits in eight years. It was nice to hear that Mr Nagaraja and his team have already developed a 25-qubit system and plan to reach the 1000-qubit target by 2029-2030.

The other objectives of the quantum mission are:

  • Establish satellite-based secure quantum communications over 2000 kilometers within India

  • Create inter-city quantum key distribution networks spanning 2000 km

  • Develop quantum sensors including magnetometers and atomic clocks for precision applications

The Government has also announced four thematic hubs for quantum research, with IISc Bengaluru being one of them along with the IITs in Delhi, Mumbai and Chennai. DRDO is also collaborating with TCS and TIFR on the development of indigenous quantum processors. HCL and Tech Mahindra are also working on quantum software and algorithms.

Apart from the Quantum Research Park  and nearly 15 start ups in Bengaluru, a large Quantum tech park is envisaged in Amaravati, Andhra Pradesh.

The integration of Quantum and AI technologies may open opportunities in Quantum Machine Learning for enhanced pattern recognition, Accelerated ML model training  and advanced optimization algorithms.

Hopefully, India will make huge strides in the field and catch up with countries like the US and China in the near future.

We wish all the innovative entrepreneurs who are working in the Quantum plus AI field a grand success.

Naavi

PS: While trying to browse qpiai.tech, do not be confused by similar-looking domain names such as qpai.tech. One wishes both these domains would put up “lookalike” disclosures.

Posted in Privacy | Leave a comment

Happy Independence Day 2025 to all

Posted in Privacy | Leave a comment