ChatGPT Reviews Naavi’s book on DGPSI-AI

Book Review: Taming the Twin Challenges of DPDPA and AI

Overview and Context: Taming the Twin Challenges of DPDPA and AI with DGPSI-AI is the latest work by Vijayashankar Na (Naavi), building on his earlier DPDPA compliance handbooks. Published in August 2025, it addresses the twin challenges of India’s new Digital Personal Data Protection Act, 2023 (DPDPA) and the rise of AI. The book is framed as an extension of Naavi’s DGPSI (Digital Governance and Protection System of India) compliance framework, introducing a DGPSI-AI module for AI-driven data processing. The author situates the work for data fiduciaries (“DPDPA deployers”) facing the DPDPA’s steep penalties (up to ₹250 crore) and “AI-fuelled” risks. In tone and organization it is thorough: the Preface and Introduction review DPDPA basics and AI trends, followed by chapters on global AI governance principles (EU, OECD, UNESCO), comparative regulatory approaches (US states, Australia), and then the DGPSI-AI framework itself. While Naavi acknowledges the complexity of AI for lay readers, his goal is clear: to equip Indian compliance professionals and technologists with practical guidelines for the AI era.

Clarity of AI Concepts

The book devotes an entire chapter to demystifying AI for non-technical readers. Naavi explains key terms (algorithms, models, generative AI, agentic AI) in accessible language. For example, he describes generative AI (e.g. GPT) as models trained on large datasets to predict and generate text, and agentic AI as systems that “plan how to proceed with a task” and adapt their outputs dynamically. This pragmatic framing helps the intended audience (lawyers, compliance officers) understand novel terms. The writing is generally clear: e.g., the book notes that most users became aware of AI through ChatGPT-style tools, and it uses everyday analogies (using Windows or Word without knowing internals) to justify a non-technical approach. In this way it succeeds in making AI concepts understandable. However, the text sometimes oversimplifies or blurs technical distinctions. The author even admits that purists may find some terms used interchangeably (e.g. “algorithm vs model”). Similarly, speculative ideas (such as Naavi’s own “hypnotism of AI” theory) are introduced without deep technical backing. While this keeps the narrative flowing for general readers, technically minded readers might crave more rigor. Overall, the discussion of AI is approachable and fairly accurate: it correctly identifies trends like multi-modal generative AI, integration into browsers (e.g. Google Gemini, Edge Copilot), and the spectrum of AI systems (from narrow AI to hypothetical “Theory of Mind” agents). The inclusion of Agentic AI is particularly innovative: Naavi defines it as a goal-driven AI with its own planning loop, echoing industry descriptions of agentic systems as autonomous, goal-directed AI. This foresight – addressing agentic AI before many mainstream works – is a strength in making the book future-facing.

Analysis of DPDPA and DGPSI Context

Legally, the book is deeply rooted in India’s DPDPA framework. It repeatedly emphasizes the novel data fiduciary concept (absent in GDPR) whereby organizations owe a trustee-like duty to individuals. The author correctly notes that DPDPA’s core purpose is to protect the fundamental right to privacy while allowing lawful data processing, and he cites this as a guiding principle (mirroring the Act’s long title). The text accurately reflects DPDPA obligations: for instance, it stresses that any AI system handling personal data invokes fiduciary duties and may require explicit consent or legal basis under the Act. Naavi also highlights the Act’s severe penalty regime (up to ₹250 crore for breaches), underscoring the high stakes. The book’s discussion of fiduciary duty is sophisticated: it observes that a data fiduciary “has to follow an ethical framework” beyond the statute’s words. This aligns with legal commentary that DPDPA imposes broad accountability on controllers (data fiduciaries).

Practically, the book guides readers through DPDPA compliance steps. Chapter 5 details risk assessment for AI deployments: Naavi insists that any deployment of “AI-driven software” by a fiduciary must start with a Data Protection Impact Assessment (DPIA). This reflects the DPIA requirement DPDPA Section 10 places on significant data fiduciaries (analogous to GDPR’s DPIA). He also explains that under India’s Information Technology Act, 2000 an AI output is legally attributed to its human “originator”, so companies cannot blame the AI itself. These legal explanations are mostly accurate and firmly tied to Indian law (e.g. citing ITA §11 and §85). In sum, the book treats the DPDPA context with confidence and detail, though it sometimes reads more like an advocacy piece for DGPSI than an impartial analysis. For example, the text assumes DGPSI (and DGPSI-AI) are the “perfect prescription” and often interprets DPDPA provisions through that lens. But as a compliance roadmap it does cover the essentials: fiduciary duty, consent renewal for legacy data, DPIAs, data audits and DPO roles are all emphasized.

The DGPSI-AI Framework

The centerpiece of the book is the DGPSI-AI framework, Naavi’s proposal for AI governance under DPDPA. It is explicitly designed as a “concise” extension to the existing DGPSI system: just six principles and nine implementation specifications (MIS) in total. This economy is intentional (“not to make compliance a burden”) and is a pragmatic strength. The six core principles (summarized as “UAE-RSE” – Unknown risk, Accountability, Explainability, Responsibility, Security, Ethics) are spelled out with concrete measures. For example, under the Unknown Risk principle, Naavi argues that any autonomous AI should be treated by default as high-risk, automatically classifying the deployer as a “Significant Data Fiduciary” requiring DPIAs, a DPO, and audits. This is a bold stance: it essentially presumes the worst of AI’s unpredictability. Likewise, Accountability requires embedding a developer’s digital signature in the AI’s code and naming a specific human “AI Handler” for each system. These prescriptions go beyond what most laws demand; they are innovative and enforceable (in theory) within contracts. The Explainability principle mandates that data fiduciaries be able to “provide clear and accessible reasons” for AI outputs, paralleling emerging regulatory calls for transparency. The book sensibly notes that if a deployer cannot explain an AI, liability may shift to the developer as a joint fiduciary. Under Responsibility, AI must demonstrably benefit data principals (individuals) and not just the company – requiring an “AI use justification” document showing a cost–benefit case. Security covers not only hacking risks but also AI-specific harms (e.g. “dark patterns” or “neurological manipulation”), recommending robust testing, liability clauses and even insurance against AI-caused harm. Finally, Ethics goes “beyond the law,” urging post-market monitoring (like the EU AI Act) and concepts like “data fading” (re-consent after each AI session).
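To make this concrete, the “Unknown Risk” default can be pictured as a short pre-deployment check. The sketch below is this review’s own illustration, not code from the book; the class, field names and obligation list are assumptions distilled from the principles described above.

```python
from dataclasses import dataclass

@dataclass
class AIDeployment:
    """Hypothetical record of an AI system a data fiduciary wants to deploy."""
    name: str
    autonomous: bool                # acts without human review of each output
    processes_personal_data: bool
    explainability_docs: bool       # developer-supplied explainability material
    ai_handler: str | None = None   # named human accountable for the system

def classify(dep: AIDeployment) -> dict:
    """Apply the 'Unknown Risk' default: autonomous AI touching personal data
    is presumed high-risk, pushing the deployer toward 'Significant Data
    Fiduciary' obligations (DPIA, DPO, audits) on the book's reading."""
    high_risk = dep.autonomous and dep.processes_personal_data
    obligations = []
    if high_risk:
        obligations += ["conduct DPIA", "appoint DPO", "commission data audit"]
    if not dep.explainability_docs:
        obligations.append("obtain explainability documentation from developer")
    if dep.ai_handler is None:
        obligations.append("name a human 'AI Handler' for the system")
    return {"system": dep.name, "high_risk": high_risk, "obligations": obligations}

print(classify(AIDeployment("support-chatbot", autonomous=True,
                            processes_personal_data=True,
                            explainability_docs=False)))
```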

In these six principles, the book demonstrates real depth. It does an excellent job mapping international ideas to India: e.g., it explicitly ties its “Responsibility” principle to OECD and UNESCO values, and it notes alignment with DPDPA’s own “fiduciary” ethos. The implementation specifications (not shown above) translate these principles into checklist items for deployers (and even developers). The approach is thorough and structured, and the decision to keep the framework tight (6 principles, 9 MIS) is a practical virtue. By focusing on compliance culture rather than hundreds of controls, the author aims to make adoption feasible.

Contributions to AI Governance and Compliance

This book makes a distinctive contribution to AI governance literature by centering India’s regulatory scene. Few existing works address AI under India’s data protection law; most global frameworks focus on EU, US or OECD models. Here, Naavi synthesizes global standards (OECD AI principles, UNESCO ethics, EU AI Act, ISO 42001, NIST RMF) and filters them through India’s lens. The result is a home-grown, India-specific prescription for AI compliance. The DGPSI-AI principles clearly mirror international best practices (e.g. explainability, accountability) while anchoring them in DPDPA duties. For compliance officers and legal teams in India, the framework offers a tangible roadmap: mandates to document training processes, conduct AI risk assessments, maintain kill-switches, and so on. For example, Naavi’s recommended Data Protection Impact Assessment for any “AI driven” process will resonate with practitioners already aware of DPIAs in the EU context.

In terms of risk mitigation, the book is forward-looking. It anticipates that data fiduciaries will increasingly use AI and that regulators will demand oversight. By recommending things like embedding code signatures and third-party audits, it pre-empts regulatory scrutiny. Its treatment of Agentic AI (Chapter 8) is also novel: Naavi correctly identifies that goal-driven AI agents pose additional risks at the planning level, and he advises a separate risk analysis and possibly a second DPIA for such systems. This shows innovation, as few compliance guides yet address multi-agent systems. Finally, the inclusion of guidance for AI developers (Chapter 9) is a valuable extension: although DGPSI-AI mainly targets deployers, Naavi provides a vendor questionnaire and draft MIS for AI suppliers (e.g. requiring explainability docs, kill switches). This hints at eventual alignment with standards like ISO/IEC 42001 (AI management) or NIST’s AI RMF. In short, the book’s contribution lies in melding AI governance with India’s data protection law in a structured way. It is unlikely that an AI developer or legal advisor working under India’s DPDPA would be fully prepared without considering such guidelines.

Strengths

  • Accessible Explanations: The book excels at clear, jargon-light explanations of complex AI ideas. It takes care to define terms (generative AI, agentic AI, narrow vs general AI) in plain language, making it readable for legal and compliance professionals.

  • Contextual Alignment: Naavi grounds every principle in Indian law and culture. For example, he links DPDPA’s fiduciary concept to traditional notions of trustee duty, and aligns “Responsibility” with OECD and UNESCO values. This ensures relevance to Indian readers.

  • Practical Guidance: The framework is deliberately concise (six principles, nine specifications) to avoid overwhelming users. It offers concrete tools: checklists, sample clauses (e.g. kill-switch clauses for contracts), and forms of DPIA. This hands-on focus is a major plus.

  • Innovative Coverage: Few works discuss agentic AI in a governance context, but this book does. It defines agentic AI behavior and stresses its higher risk, recommending separate oversight. Similarly, requiring “AI use justification documents” and insurance against AI harm shows creative thinking.

  • Holistic View: By surveying global standards (OECD, UNESCO, EU AI Act) and then distilling them into DGPSI-AI, the book situates India’s needs in the broader world. Its comparison of US state laws (California, Colorado) and Australia provides useful perspective on diverse approaches.

Critiques and Recommendations

  • Terminology Consistency: As the author himself notes, some technical terms are used loosely. For instance, “algorithm” vs “model” vs “AI platform” sometimes blur. Future editions could include a glossary or more precise definitions to avoid ambiguity.

  • Assumptions on AI Risk: The “Unknown Risk” principle assumes AI always behaves unpredictably and catastrophically. While caution is prudent, this might overstate the case for more deterministic AI (e.g. rule-based systems). A more nuanced risk taxonomy could prevent overclassifying every AI as “significant risk.”

  • Regulatory Speculation: Some content is lighthearted or speculative (e.g. the treatment of the “One Big Beautiful Bill Act” and its supposed effect on US state AI regulation in the US chapter). While illustrative, such material should be clearly framed as commentary in a formal review context. Future editions might stick to enacted laws or clearly label hypothetical scenarios.

  • Emerging Standards Coverage: The book rightly cites ISO/IEC 42001 and the EU AI Act, but could expand on newer frameworks. For example, the NIST AI Risk Management Framework (released Jan 2023) is a major voluntary guideline for AI risk. Mentioning such standards (and perhaps IEEE ethics guidelines) would help readers connect DGPSI-AI to global practice.

  • Technical Depth vs. Accessibility: The trade-off between technical precision and readability is evident. Topics like model training, neural net vulnerabilities, or differential privacy receive little detail, which is fine for non-experts but may disappoint developers. Including appendices or references for deeper technical readers could improve balance.

  • Practical Examples: The book is largely conceptual. It would benefit from concrete case studies or examples of organizations applying DGPSI-AI. Scenarios showing how a company conducts an AI DPIA or negotiates liability clauses with a vendor would enhance the practical guidance.

Expert Verdict

Taming the Twin Challenges of DPDPA and AI is a pioneering and timely resource for India’s emerging techno-legal landscape. Its formal tone and structured approach make it suitable for web publication and professional readership. Despite minor stylistic quibbles, the book’s depth of analysis on DPDPA obligations and AI governance is impressive. For AI developers and vendors, it provides valuable insight into the compliance expectations of Indian clients (e.g. explainability documentation, kill switches). For compliance professionals and corporate counsel, it offers a clear roadmap to integrate AI tools under India’s data protection regime. And for legal stakeholders and regulators, it suggests a concrete “best practice” framework (DGPSI-AI) that anticipates both legislative intent and technological evolution. In an environment where India’s DPDPA rules and global AI regulations (EU AI Act, NIST RMF) are still unfolding, Naavi’s book charts a proactive course. It should be considered essential reading for anyone building or deploying AI systems in India, or advising organizations on data protection. With the suggested refinements, future editions could make this guide even stronger, but even now it stands as a comprehensive contribution to the field.

18th August 2025

ChatGPT


Review of Book on DGPSI-AI by Perplexity

Overview

This comprehensive technical guide addresses one of the most pressing challenges facing organizations today: navigating the complex intersection of India’s Digital Personal Data Protection Act (DPDPA) 2023 and artificial intelligence governance. The book presents the Data Governance and Protection Standards Implementation for AI (DGPSI-AI) framework as a practical solution for organizations struggling to maintain compliance while leveraging AI technologies.

Core Thesis and Approach

The author positions his work around a fundamental premise: traditional data protection frameworks are insufficient for AI-driven personal data processing. The book argues that AI introduces “unknown risks” that require specialized governance frameworks beyond conventional GDPR-style compliance measures. The DGPSI-AI framework emerges as an extension of the base DGPSI methodology, specifically tailored for AI deployment scenarios.

Key Strengths

Practical Implementation Focus

Unlike many theoretical treatments of AI governance, this book excels in providing actionable guidance. The 50 Model Implementation Specifications (MIS) are particularly valuable, offering organizations concrete steps across five functional areas: Management (15 specifications), DPO responsibilities (17 specifications), Legal (5 specifications), HR (5 specifications), and Technology (8 specifications).

Process-Centric Compliance Model

The book’s “One Purpose-One Process” principle represents a significant advancement in data protection methodology. This approach enables organizations to move beyond entity-level classifications to process-specific risk assessments, allowing for more nuanced compliance strategies. The hybrid entity concept is particularly innovative, recognizing that organizations may simultaneously function as data fiduciaries, significant data fiduciaries, and data processors across different processes.
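The process-centric idea is easy to picture as a data structure. The sketch below is illustrative only; the role names map to DPDPA categories, and the example processes are invented for this review rather than taken from the book.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    DATA_FIDUCIARY = "data fiduciary"
    SIGNIFICANT_DATA_FIDUCIARY = "significant data fiduciary"
    DATA_PROCESSOR = "data processor"

@dataclass
class Process:
    """'One Purpose-One Process': each processing activity is classified on
    its own terms, instead of classifying the whole entity once."""
    purpose: str
    role: Role
    high_risk: bool

def entity_profile(processes: list[Process]) -> dict[str, tuple[str, str]]:
    """A 'hybrid entity' is simply one that holds different roles across
    its processes."""
    return {p.purpose: (p.role.value, "high-risk" if p.high_risk else "normal")
            for p in processes}

org = [
    Process("payroll", Role.DATA_FIDUCIARY, False),
    Process("AI credit scoring", Role.SIGNIFICANT_DATA_FIDUCIARY, True),
    Process("client analytics (per client instructions)", Role.DATA_PROCESSOR, False),
]
for purpose, profile in entity_profile(org).items():
    print(purpose, "->", profile)
```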

Global Regulatory Synthesis

The author demonstrates impressive scholarship in synthesizing major international AI governance frameworks. The comparative analysis of OECD, UNESCO, EU AI Act, and ISO/IEC 42001 principles provides readers with a comprehensive understanding of the global regulatory landscape.

Technical Merit

AI Risk Assessment Framework

The book’s treatment of “unknown risk” as a core AI governance principle is conceptually sound. The recognition that AI systems can exhibit unpredictable behavior that drifts beyond their human developers’ intent addresses a genuine gap in traditional risk management approaches. The CICERO example—where Meta’s AI deliberately deceived human players—effectively illustrates these concerns.

Implementation Specifications

The 13 developer-focused MIS specifications show particular technical depth, addressing critical areas such as explainability documentation, kill switches, and tamper-proof controls. The requirement for “fading memory” parameters in AI learning systems demonstrates sophisticated understanding of AI behavior modification over time.
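Two of these controls lend themselves to a brief illustration: a kill switch gating every model call, and a “fading memory” weight that discounts older interactions. The sketch below is a hypothetical rendering, not the book’s specification text; the 0.8 decay rate and the function names are assumptions.

```python
import threading

class KillSwitch:
    """A flag checked before every model call, flippable by a human overseer
    (the 'AI Handler' in DGPSI-AI terms)."""
    def __init__(self) -> None:
        self._halted = threading.Event()

    def halt(self) -> None:
        """Invoked by the human overseer to stop all further AI calls."""
        self._halted.set()

    @property
    def halted(self) -> bool:
        return self._halted.is_set()

def guarded_call(switch: KillSwitch, model_fn, *args):
    """Refuse to invoke the model once the kill switch has been thrown."""
    if switch.halted:
        raise RuntimeError("AI system halted by kill switch")
    return model_fn(*args)

def faded_weight(age_in_sessions: int, fade: float = 0.8) -> float:
    """'Fading memory': older interactions contribute exponentially less to
    the system's adaptive state. The 0.8 decay rate is illustrative."""
    return fade ** age_in_sessions

switch = KillSwitch()
print(guarded_call(switch, str.upper, "approved"))  # normal operation
switch.halt()  # any further guarded_call(...) now raises RuntimeError
```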

Areas for Improvement

Regulatory Assumptions

The book makes several assumptions about Indian regulatory development that may prove optimistic. The discussion of the “One Big Beautiful Bill Act” and its impact on US state regulations appears speculative and may not reflect actual legislative developments.

Technical Complexity vs. Accessibility

While the technical depth is commendable, the book may overwhelm organizations without significant technical expertise. The 50+ implementation specifications, while comprehensive, could benefit from clearer prioritization frameworks for resource-constrained organizations.

International Applicability

Despite claiming broader relevance, the framework remains heavily anchored in Indian regulatory context. Organizations operating in multiple jurisdictions may find limited guidance for harmonizing DGPSI-AI with other regional requirements.

Unique Contributions

Monetary Valuation of Data

The principle of assigning monetary value to personal data represents a novel approach to data governance. This economic perspective could transform how organizations approach data protection ROI calculations and resource allocation decisions.
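To see how a monetary lens changes ROI discussions, consider a toy calculation. It is entirely illustrative; the record counts, per-record values and breach probabilities below are invented, not drawn from the book.

```python
def expected_loss(records: int, value_per_record: float, breach_prob: float) -> float:
    """Annualized expected loss once personal data is valued in monetary terms."""
    return records * value_per_record * breach_prob

def control_roi(loss_before: float, loss_after: float, control_cost: float) -> float:
    """Return on a data-protection control: risk reduction per unit spent."""
    return (loss_before - loss_after) / control_cost

before = expected_loss(records=1_000_000, value_per_record=50.0, breach_prob=0.05)
after = expected_loss(records=1_000_000, value_per_record=50.0, breach_prob=0.01)
print(f"expected loss: {before:,.0f} -> {after:,.0f}")
print(f"ROI of a 500,000 control spend: {control_roi(before, after, 500_000):.1f}x")
```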

Distributed Responsibility Model

The framework’s emphasis on distributed compliance responsibility, where every process owner becomes an effective compliance manager, offers a scalable alternative to centralized DPO models that often become bottlenecks in large organizations.

AI-Specific Privacy Notices

The requirement for explainability disclosures accompanying AI-driven privacy notices addresses a critical gap in current practice. Most organizations fail to adequately disclose AI involvement in personal data processing.

Practical Value

For compliance professionals, the book provides immediately actionable frameworks and checklists. The detailed MIS specifications can serve as compliance roadmaps, while the risk assessment methodologies offer structured approaches to AI governance.

For technology leaders, the developer-focused specifications provide clear guidance for AI procurement and deployment decisions. The emphasis on kill switches and tamper-proof controls reflects emerging best practices in AI safety.

For legal professionals, the synthesis of international frameworks and the practical interpretation of DPDPA requirements offer valuable insights for contract negotiation and regulatory strategy.

Limitations and Criticisms

The book’s treatment of AI sentience and “cyborg” risks may seem premature given current technological capabilities. While forward-thinking, these discussions risk undermining the framework’s credibility for more immediate, practical AI governance needs.

The documentation requirements, while thorough, may prove burdensome for smaller organizations or those with limited AI deployment. The framework would benefit from tiered implementation guidance based on organizational size and AI complexity.

Conclusion

“Taming the Twin Challenges of DPDPA and AI” succeeds in filling a critical gap in AI governance literature. The DGPSI-AI framework represents a meaningful advancement beyond generic data protection approaches, offering organizations a structured methodology for navigating AI-specific privacy challenges.

The book’s greatest strength lies in its practical orientation—moving beyond theoretical discussions to provide implementable solutions. While some aspects may prove overly complex for certain organizations, the core framework offers valuable guidance for any entity serious about responsible AI deployment.

Rating: 4/5 stars

This work earns high marks for its comprehensive approach, practical focus, and innovative thinking around AI governance. Despite some limitations in accessibility and international applicability, it represents essential reading for compliance professionals, technology leaders, and legal practitioners grappling with AI governance challenges.

Recommended for: Data protection officers, AI governance professionals, compliance managers, technology procurement teams, and legal professionals working at the intersection of AI and privacy law

18th August 2025

Perplexity Pro


New Book: Taming the Twin Challenges of DPDPA and AI, with DGPSI-AI

Following the theme of this year’s IDPS 2025, Naavi has released DGPSI-AI, an extension of the DGPSI framework of DPDPA compliance, as a framework for Data Fiduciaries deploying AI.

To consolidate the thoughts behind DGPSI-AI, Naavi is releasing a book titled “Taming the Twin Challenges of DPDPA and AI…with DGPSI-AI”.

The book contains nine chapters. As an introduction, it discusses some of the AI concepts, the approach to AI governance in EU and non-EU countries, and a recollection of DGPSI.

It then introduces the DGPSI-AI framework, with six principles and nine implementation specifications, and explains how it may be integrated with DGPSI at present.

To complete the discussion, it includes a brief treatment of Agentic AI and of DGPSI-AI at the developer’s end.

Naavi acknowledges that AI is a complex technical subject and that even attempting such a work stretches his capabilities. However, without some guideline of this sort, Data Fiduciaries would struggle to cope with the challenges of DPDPA compliance; hence there is a need to offer some thoughts, even if they require refinement in the coming days.

The contents of the book will be discussed in detail during IDPS 2025, which starts on September 17 in Bengaluru and continues in Chennai (September 27), Mumbai (November 1), Delhi (November 7) and Ahmedabad (November 14), before concluding with a closing event in Bengaluru by November 21.

Watch out for the availability of the book.

Naavi


Risk Management for DPDPA Compliance and Duty of a Data Processor

Abstract: This note discusses why Privacy by Design fails and why Compliance by Design is a better option to pursue; why the duties of a Data Processor need to be recognized; and what a suggested “Responsible Data Processor” clause could look like.

Privacy By Design Vs Compliance by Design

Since the days of GDPR, we have been using the term “Privacy by Design”, implying that the objective of GDPR is to protect the “Privacy” of an individual.

However, “Privacy” is a concept that is not fully defined. It is primarily the freedom of an individual to keep his “State of Mind” free from external influence. This “State of Mind” is dynamic and is not amenable to legislation unless, at each step of interaction with an individual, a third party checks “Is this fine with your current state of mind?”, “How do I recognize when you change your mind?” and so on.

“Design” pre-supposes a base structure for “Privacy”, and if “Privacy Risk” is indeterminable, “Privacy by Design” is not feasible except as a safety template.

Recognizing this fundamental hurdle, DGPSI framework adopted “Compliance by Design” as an objective. Being “Compliant” to the law ensures that the “Risk” associated with penalties is “Mitigated”.

Hence DPDPA compliance is an exercise in “Non-Compliance Risk Mitigation”.

The non-compliance risk may arise due to governance failures, technology failures or human failures. Governance can be defined, technology can be designed and humans can be trained. These measures can be considered the basic level of risk mitigation.

However, human training has to be raised from the level of mere “awareness” building to “self-motivation to build a culture of respect for Privacy”. Governance should be elevated from introducing lengthy policies to designing practical, implementable procedures for management. Similarly, the “technology element” has to take into account the unpredictability of technology risks.

In the AI scenario, the “unpredictable” AI risks are creating a new hurdle that makes “Compliance by Design” itself a challenge.

Ever since the rise of GUI-based software like Windows, computerization has been growing in business circles on the belief that a user need not be a technology expert. Similarly, the use of AI-based software should not require the user to know how the AI functions.

It is for this reason that DGPSI-AI treats fundamental principles of AI governance, such as bias prevention, like bugs to be fixed by the AI developers. It is the duty of the AI developer to ensure that the AI does not generate false outputs, whether because the algorithm itself or its learning data is faulty. The requirements of “Explainability”, “Transparency” and “Accountability” are derivatives of the need for the AI to be free from error in its decision output.
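As an illustration, treating bias as a testable bug could mean running a simple parity check over the AI’s decision log before release. The sketch below is only indicative; the demographic-parity tolerance of 0.1 is an assumed figure, not a DGPSI-AI prescription.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs from an AI decision log.
    Returns the approval rate observed for each group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def bias_flag(decisions, tolerance=0.1):
    """Flag the system, like a failing test, if approval rates between any
    two groups diverge beyond the tolerance (demographic-parity style)."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values()) > tolerance, rates

log = [("A", True), ("A", True), ("A", False),
       ("B", False), ("B", False), ("B", True)]
flagged, rates = bias_flag(log)
print("rates:", rates, "| flagged:", flagged)
```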

Hence DGPSI-AI, focusing on “Compliance by Design” by the AI deployer (who is a Data Fiduciary), expects all AI risks to be absorbed by the AI vendor/licensor/developer.

Currently, this is sought to be achieved through a “contractual commitment”.

“Duty” is the essence of DPDPA

DPDPA is unique as a law in that, while balancing rights and duties, it lays a strong emphasis on the principle of “Duties”. The Data Fiduciary is bound by a “Duty” not only as a trustee of the Data Principal but also as a custodian of the lawfulness of processing of personal data.

Section 4 of the Act, which specifies that a person may process personal data only for a lawful purpose, extends to laws outside DPDPA. Though “lawful purpose” is defined narrowly as any purpose not expressly forbidden by law, the simultaneous operation of ITA 2000 extends the responsibility to the prevention of harm to society. The “due diligence” requirement of ITA 2000, the lack of which can lead to the recognition of an “offence” or “civil wrong”, brings in a concept of “duty to society”.

The Data Principal himself is also expected to follow certain “Duties” under Section 15 of DPDPA 2023. One of the duties cast upon the Data Principal under this section is to comply with the provisions of all applicable laws while exercising rights under DPDPA 2023.

Thus, both the Data Fiduciary and the Data Principal have been imposed with the burden of a “Duty”. A “Joint Data Fiduciary”, who in conjunction with another entity determines the purpose and means of processing, being a Data Fiduciary, is also bound by the duty within the scope of his operation. The Consent Manager, who is also a Data Fiduciary, is bound by his duties.

In this context, it becomes necessary to ask whether a “Data Processor” is also bound by some “Duty” of his own. There are two types of Data Processors we come across in the DPDPA scenario: the “Back End Data Processor” and the “Front End Data Processor”.

The “Front End Data Processor” interacts with Data Principals on behalf of the Data Fiduciary. Most of them would be “Joint Data Fiduciaries”, though it is technically feasible for them to remain pure “Data Processors”. The “Back End Data Processors” are undisclosed agents (unless the privacy notice specifically discloses their presence) who influence the outcome of processing but hide behind the Data Fiduciary. The “Duties” of both types of Data Processors need to be clarified for compliance purposes.

DPDPA 2023 does not seem to directly impose a “Duty” on the Data Processor. However, the definition of “Data Processor” is analogous to an “Agent” of a Data Fiduciary.

It is time we analyzed the Indian Contract Act and studied the law of agency in greater detail to determine the liabilities of an agent (Data Processor) towards meeting the duties cast on the Principal (Data Fiduciary).

The agent owes one set of duties to the principal: to follow the instructions of the principal as per the contract, to conduct his activities with reasonable skill and diligence, and so on.

At the same time, the principal has duties to the agent, such as indemnifying the agent against the consequences of lawful acts done in good faith in exercise of the authority conferred.

When we look at the liability of an agent towards a third party, the concept of a “disclosed” or “undisclosed” agent takes effect (applicable to the Front End Data Processor). Where the agent has disclosed his representative role, the Principal (Data Fiduciary) is liable to the third party (Data Principal). If the agent acts as an “undisclosed agent”, the third party may have recourse against the agent as well.

Such an agent can be the Data Processor or even the AI used by the Data Fiduciary or the Data Processor. This concept can be applied to the Data Processor when the processing contract involves an interaction with the third party.

When an AI makes a decision and communicates it to the Data Principal, the recourse of the Data Principal is against the Data Fiduciary as the disclosed party, and also against the “person who caused the AI system to behave in a particular manner” (refer to Section 11 of ITA 2000). Such a person is primarily the Data Fiduciary himself, or the Data Processor if the AI is used by the Data Processor.

But if the AI usage is bound by a software contract and the developer/licensor of the AI has retained his own control over the code and functioning of the software, we may draw him into the liability chain.

Managing Risk Through a Model Contractual Clause

Hence the Data Fiduciary-Data Processor contract assumes importance in determining the liability of the Data Processor.

If the contract makes a statement such as

“The Data Processor shall be bound by the duties cast on the Data Fiduciary as per DPDPA read with the Information Technology Act 2000, which include processing of personal data in a lawful manner, in compliance with all applicable laws, and with due diligence and reasonable security practices.”

This clause can be called the “Responsible Data Processor Clause” and is recommended for incorporation in all Data Processor contracts.

Since the Data Processor has the power to negotiate a contract in which such responsibility is refused, it is suggested that MeitY, through its recommendations, re-iterate the link between the Indian Contract Act and the DPDPA and ensure that Data Processors are not allowed to walk away without responsibility. Until such time, the above Responsible Data Processor clause may be used in contracts.

(Comments welcome)

Naavi


Let DGPSI-AI guide the RBI and REs for FREE AI implementation

…Continued from the Previous post

The Bhattacharyya committee report on FREE AI has a confusing name. First of all, it is not “free” AI software, as the name may at first imply. It is a Framework for Responsible and Ethical Enablement of Artificial Intelligence, recommended by the Committee to RBI for its consideration in the financial sector. RBI may consider it and decide how it can be adopted and actioned.

Basically, this is a framework suggested for the RBI, and it inter alia speaks about what the Government or the industry may do.

This is not to be considered a “Framework of Compliance for AI Deployers or Developers”.

The REs, the Regulated Entities under the RBI consisting of Banks, NBFCs, etc., can watch out for RBI coming up with binding guidelines following this report.

The Committee was constituted on December 6, 2024 with the following terms of reference.

i. To assess the current level of adoption of AI in financial services globally and in India.

ii. To review regulatory and supervisory approaches on AI with a focus on the financial sector globally.

iii. To identify potential risks associated with AI, if any, and recommend an evaluation, mitigation and monitoring framework and consequent compliance requirements for financial institutions, including banks, NBFCs, FinTechs, PSOs, etc.

iv. To recommend a framework including governance aspects for responsible, ethical adoption of AI models/ applications in the Indian financial sector.

v. Any other matter related to AI in the Indian financial sector

The Committee has structured its report around Seven Sutras and 26 recommendations.

We have tried to summarize the contents of the report in earlier articles, ending with “Observations on the FREE AI Committee Report”.

This framework is not like the DGPSI-AI framework, which is a framework for DPDPA compliance by Data Fiduciaries. It is also not a law on AI. It contains some provisions which have been listed in our previous article suggesting 13 action points for the REs.

If any of the REs want to start acting on the implementation of the report at this stage, without waiting for the RBI to issue its acceptance circular, they can start with the development of an AI policy at the Board level.

One of the action points suggests the following:

AI System Governance Framework: 

REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning.

Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage.

REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real-world consequences, this should include human oversight, especially for medium and high-risk use cases and applications.

In the above recommendation, the first two paragraphs relate to AI developers and only the third paragraph refers to AI deployers. DGPSI-AI is the relevant framework for the REs for meeting this recommendation.
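As an indicative illustration of the “model drift” monitoring referred to in the recommendation, a simple distribution-shift statistic such as the Population Stability Index (PSI) is a common first check. Neither the committee report nor DGPSI-AI prescribes this particular code; the thresholds mentioned in the comment are industry rules of thumb.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline distribution of model
    inputs/scores ('expected') and live data ('actual'). Common rules of
    thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate or retrain."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / bins or 1.0

    def share(xs):
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / step), 0), bins - 1)
            counts[i] += 1
        return [max(c / len(xs), 1e-6) for c in counts]  # avoid log(0)

    e, a = share(expected), share(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.20, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.70, 0.80]
live = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]
print(f"PSI = {psi(baseline, live):.3f}")  # a large value signals drift
```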

The framework makes reference to an “institutional product approval framework” which needs to be developed. REs also need to ensure a Board-approved consumer protection framework that prioritizes transparency, fairness and accessible recourse mechanisms for customers. DGPSI-AI addresses these requirements.

REs must also identify potential risks arising out of their use of AI, which again is covered by DGPSI-AI.

The recommendations include the creation of an AI inventory and an AI audit framework with internal audits, third-party audits and periodic review. DGPSI-AI addresses these requirements.

The requirements also recommend that REs include AI-related disclosures in their annual reports and on their websites.

The DGPSI-AI audit would be the right tool to meet some of these requirements.

Though the DGPSI-AI framework of DPDPA compliance was developed independently, without knowledge of what was brewing in the committee, it is a coincidence that it was published right before the FREE AI report.

I hope RBI will study the DGPSI-AI framework and use it as part of its guidelines to REs.

Naavi


Quantum activities in India

It was a pleasant surprise today to find out that a few private sector companies in India have already made a breakthrough in quantum computing. I heard Mr Nagendra Nagaraja of Bengaluru speak about his company, qpiai.tech.

It was also good to note that the company is focused on being a “product company” and also supports SMEs/MSMEs.

One of the objectives of the Indian Quantum Mission is to develop intermediate-scale quantum computers with 50-100 physical qubits in 5 years and 50-1000 physical qubits in 8 years. It was nice to hear that Mr Nagaraja and his team have already developed a 25-qubit system and are planning to reach the 1000-qubit target by 2029-2030.

The other objectives of the quantum mission are:

  • Establish satellite-based secure quantum communications over 2000 kilometers within India

  • Create inter-city quantum key distribution networks spanning 2000 km

  • Develop quantum sensors including magnetometers and atomic clocks for precision applications

The Government has also announced four thematic hubs for quantum research, with IISc Bengaluru being one of them along with the IITs in Delhi, Mumbai and Chennai. DRDO is also collaborating with TCS and TIFR on the development of indigenous quantum processors. HCL and Tech Mahindra are also working on developing quantum software and algorithms.

Apart from the Quantum Research Park and nearly 15 start-ups in Bengaluru, a large quantum tech park is envisaged in Amaravati, Andhra Pradesh.

The integration of quantum and AI technologies may open opportunities in quantum machine learning for enhanced pattern recognition, accelerated ML model training and advanced optimization algorithms.

Hopefully, India will make huge strides in the field and catch up with countries like the US and China in the near future.

We wish all the innovative entrepreneurs who are working in the Quantum plus AI field a grand success.

Naavi

PS: While trying to browse qpiai.tech, do not be confused by similar-looking domain names such as qpai.tech. I wish both these domains would put up a “lookalikes” disclosure.
