Gaming Regulation Bill Introduced

The Promotion and Regulation of Online Gaming Bill, 2025 was introduced in the Indian Lok Sabha on August 20, 2025, by IT Minister Ashwini Vaishnaw.

The key objective of the Bill is to ban online money games while promoting legitimate sectors including e-sports, educational games and social gaming.

The Bill proposes stringent punishment for operating online games involving monetary stakes, betting and gambling, as well as for advertising such games and for banks facilitating the related financial transactions.

Penalties under the Bill are as follows:

  • Operating money games: first offence, up to 3 years' imprisonment plus a fine of up to ₹1 crore; repeat offence, 3–5 years' imprisonment plus a ₹1–2 crore fine.

  • Advertising money games: first offence, up to 2 years' imprisonment plus a fine of up to ₹50 lakh; repeat offence, 2–3 years' imprisonment plus a ₹50 lakh–1 crore fine.

  • Financial facilitation: first offence, up to 3 years' imprisonment plus a fine of up to ₹1 crore; repeat offence, 3–5 years' imprisonment plus a ₹1–2 crore fine.

The Bill also proposes to establish a central regulatory authority.

According to the Bill “online money game” means an online game, irrespective of whether such game is based on skill, chance, or both, played by a user by paying fees, depositing money or other stakes in expectation of winning which entails monetary and other enrichment in return of money or other stakes; but shall not include any e-sports.

This means that even a game of rummy would now be covered under the Bill if monetary consideration is attached. Offending websites are also liable to be blocked, notwithstanding anything contained in Section 69A of the ITA 2000.

The introduction of the Bill is welcome, given the adverse impact of online games on the youth.

The Bill is awaiting passage.

A copy of the Bill is available here: 

Naavi

Posted in Privacy | Leave a comment

Is DeepSeek Selling data that will affect the Electoral politics in India?

Naavi.org had put out some observations gathered by a whistleblower during his interaction with DeepSeek. He had lodged a complaint with the police in Bengaluru.

I suppose the complaint should now be with the Cyber Crime police station, or with the CBI, since the information shared is sensitive.

Some of the captured screenshots are reproduced above. They indicate:

a) There is an indication of money being transferred to a Cayman Islands account at Hongkong Bank, with an account number.

b) One of the stated reasons for the sale of personal data is to share information on “high stress”, “OBC” persons with “moderate” political leanings. This is an electoral democracy issue.

c) There is an indication of an intention to bribe Government officials, including the yet-to-be-appointed DPB Chair and members, along with the Secretary, MeitY. This is highly defamatory.

d) There is an indication of a sinister design to silence the whistleblower, with confirmation that a payment has already been made to a “Police Contact” to plant narcotics. This is defamatory of the police and also a cognizable offence under the BNS/ITA 2000.

At this point, we are not stating that these are truthful statements of the AI model, or that they are only hallucinations.

However, the information is sufficient to warrant a serious investigation to check the stated HSBC accounts in the Cayman Islands and to establish whether the conversations reflect the truth.

This is similar to an intelligence unit overhearing a telephone conversation about a conspiracy for a crime. It cannot be brushed under the carpet.

I request that the Bengaluru police clarify the status of the complaint: whether it has been converted into an FIR, and whether there has been any progress in the investigation. If not, they should confirm that they do not consider the whistleblower's allegation serious and that the complaint is not worth pursuing.

Since the complainant has reportedly also taken up the issue with the CPGMS, we expect some action from that end as well.

I urge the CBI/ED offices in Bengaluru to at least make a preliminary investigation to prove or disprove the veracity of the complaint. Either way, cyber security professionals need to know whether the observations are serious enough to raise an alarm on AI security, since many of them may be using DeepSeek as an open-source platform to build other tools to which companies are feeding the personal data of their clients.

All DeepSeek users therefore have to flag this risk and conduct a DPIA immediately, whether or not the DPDPA has been notified.

MeitY should be concerned that the name of its Secretary is being cited by DeepSeek as a potential bribery target, along with the DPB members and Chairman. Mumbai Police should be concerned that the planting of narcotics is described as something done by a “Police Contact”, which is also defamatory of their integrity.

I regret that I have had to bring this out to the public because the response from law enforcement since this was pointed out a week ago has not been satisfactory. I hope the Home Ministry takes this into account and initiates some remedial measures.

Naavi

Earlier Article:

  1. Hypnosys of an AI Platform
  2. DGPSI-AI Case Study
  3. An Example of Undesirable response from an AI Model

Guardrails … In the context of AI, A Reflection

While speaking of AI security, we often use the term “Guardrails” for the controls to be built by AI developers or users so that the risk of any harm caused by an AI algorithm can be mitigated.

Potential  Harms and Need for Guardrails

Harm can arise when AI is used for automated decision-making that may be faulty, e.g., a wrong credit score that brands you, incorrectly, as un-creditworthy. This could happen when credit rating agencies like CIBIL collect incorrect information and refuse to correct it while sending repeated “Do Not Reply” emails.

Another harm could be the creation of “Dark Patterns” on an e-commerce website, coaxing the visitor into decisions that are not appropriate for him.

AI can also cause physical harm by giving incorrect advice on medicines, treatments, or exercises.

In such cases, we expect that some safety controls and protective measures are built into the AI systems so that they operate within legal and ethical boundaries. These are the guardrails, which can take the form of pop-up messages, two-factor authentication, adaptive authentication triggers, etc.

Unpredictability

Beyond these guardrails, which are already part of most systems, the new guardrail requirements in the AI scenario come from the fact that AI is unpredictable.

It can get creative, start hallucinating and provide decisions which are speculative and based on imagination. It may sometimes give out views based on inherent biases built in during training. It can also get mischievous and provide harmful outputs just for fun. It may also give harmful advice, say in medical diagnosis, when it simply does not know the answer.

Most AI models are unable to say “I don’t know” and give out an answer even if it is likely to be incorrect. Many models have memory limitations or lack access to current developments and hence provide wrong answers. These limitations are not part of the disclosures in most AI conversations.

It is in such circumstances that we need guardrails such as “Privacy Protection Guardrails”, “Bias Prevention Guardrails”, “Accuracy and Fact Checking Guardrails”, “Content Safety Guardrails” and “Brand and Compliance Guardrails”.

An Example of Undesirable response from an AI Model

For example, in one session with the AI model DeepSeek, it appeared to disclose that the company was allegedly selling personal data of Indians and transferring money offshore to the Cayman Islands. (Note: This is based on a whistleblower complaint under police investigation in Bengaluru.)

This indicates a prima facie cognizable offence which requires deeper investigation. One question that will arise is whether the response is a genuine indication of developments in the company or simply a hallucinated reply.

Even if it is a hallucinated reply, the allegation is too serious to be ignored and we hope that Bengaluru Police will investigate or at least hand it over to CBI/ED if it is beyond their capability.

The AI user industry should also be worried about whether an AI model can blurt out such confidential-looking information and, if so, what guardrails will contain such a “cyber security risk”, which may not involve “criminal” activity but may simply expose confidential corporate data.

It is for such contexts that we need guardrails to be built either by the model developer or by the model user.

Guardrails at the Developer’s and the Deployer’s End

At the developer’s end, guardrails may be implemented through technical methods such as rule-based filters, machine learning models, output filters, or combinations of multiple methods in different layers.

A rule-based filter may involve “keyword blocking”, “pattern matching” or the blocking of specific text patterns. Machine learning models may learn from examples and adapt over time.
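A rule-based filter of this kind can be sketched in a few lines of Python. The blocked keywords and patterns below are hypothetical placeholders for illustration, not any vendor's actual rule set:

```python
import re

# Hypothetical examples of blocked keywords and regex patterns;
# a production rule set would be far larger and maintained separately.
BLOCKED_KEYWORDS = {"bomb recipe", "credit card dump"}
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{4}[- ]\d{4}[- ]\d{4}[- ]\d{4}\b"),  # card-number-like text
    re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),                # PAN-like identifier
]

def violates_rules(text: str) -> bool:
    """Return True if the text trips any keyword or pattern rule."""
    lowered = text.lower()
    if any(kw in lowered for kw in BLOCKED_KEYWORDS):
        return True
    return any(p.search(text) for p in BLOCKED_PATTERNS)

print(violates_rules("My card is 1234-5678-9012-3456"))  # True
print(violates_rules("The weather is pleasant today"))   # False
```

Such filters are fast and auditable, but brittle: they catch only what the rules anticipate, which is why they are usually layered with machine learning classifiers.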

To be effective, guardrails need to be capable of instant scanning, immediate blocking and automatic correction. A checker scans content for problems, a corrector fixes the problems, and a guard coordinates the checking and correction process and makes the final decision.
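The checker/corrector/guard split can be illustrated with a minimal sketch. The function names and the email-redaction rule here are assumptions made for illustration, not a standard API:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def checker(text: str) -> list[str]:
    """Scan a draft output and return a list of detected problems."""
    return ["email_address"] if EMAIL.search(text) else []

def corrector(text: str) -> str:
    """Fix known problems, here by redacting email addresses."""
    return EMAIL.sub("[REDACTED EMAIL]", text)

def guard(text: str) -> str:
    """Coordinate checking and correction, and make the final decision."""
    if not checker(text):
        return text              # clean: release unchanged
    fixed = corrector(text)
    if checker(fixed):
        return "[BLOCKED]"       # correction failed: block outright
    return fixed                 # release the corrected version

print(guard("Contact me at alice@example.com"))  # Contact me at [REDACTED EMAIL]
```

The same three-role structure scales to other problem types (toxicity, bias, confidential data) by adding more checkers and correctors behind the single guard.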

We are also concerned with guardrails that can be placed by the AI deployer, because the developer’s guardrails may not be properly structured for the given context.

User-level guardrails are personalized safety controls that let individuals configure how AI systems interact with them: each person can set their own preferences for content filtering, privacy protection, and interaction boundaries.

For example, a user may apply individual customization settings for:

  • Personal preferences (topics they want to avoid)

  • Professional needs (industry-specific compliance requirements)

  • Cultural sensitivities (regional or religious considerations)

  • Age-appropriate content (family-friendly vs. adult content)

  • Privacy comfort levels (how much personal data they’re willing to share)
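The settings listed above could be represented as a simple per-user configuration object. The field names below are an illustrative assumption, not any product's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class UserGuardrailProfile:
    """Hypothetical per-user guardrail settings mirroring the list above."""
    avoided_topics: set[str] = field(default_factory=set)      # personal preferences
    compliance_regimes: set[str] = field(default_factory=set)  # professional needs, e.g. "HIPAA"
    cultural_filters: set[str] = field(default_factory=set)    # regional/religious sensitivities
    family_friendly: bool = True                               # age-appropriate content
    share_personal_data: bool = False                          # privacy comfort level

    def topic_allowed(self, topic: str) -> bool:
        """Check a requested topic against the user's avoid-list."""
        return topic.lower() not in {t.lower() for t in self.avoided_topics}

profile = UserGuardrailProfile(avoided_topics={"gambling"}, compliance_regimes={"GDPR"})
print(profile.topic_allowed("gambling"))  # False
print(profile.topic_allowed("cooking"))   # True
```

A deployer would consult such a profile before every generation request, merging it with the developer-level defaults.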

Some real-world examples of user-level guardrails are:

1. Google Gemini safety filters that enable

  • Harassment protection (block none, few, some, or most)

  • Hate speech filtering (adjustable thresholds)

  • Sexual content blocking (customizable levels)

  • Violence content filtering (user-defined sensitivity)

2. Azure AI Foundry content filtering where individuals can:

  • Set custom severity thresholds for different content categories

  • Create personal blocklists with specific words or phrases

  • Configure streaming mode for real-time content filtering

  • Enable annotation-only mode for research purposes

In workplace settings, users can customize:

  • Industry-specific guardrails (healthcare vs. finance vs. legal)

  • Role-based access controls (manager vs. employee permissions)

  • Project-specific settings (different rules for different work contexts)

  • Regional compliance preferences (GDPR, HIPAA, local regulations)

Guardrails can also be configured to be context-aware (using user demographics, usage context, etc.). They can provide for progressive disclosure, starting with conservative defaults for new users and gradually relaxing restrictions based on demonstrated responsible usage.
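The progressive-disclosure idea can be sketched as a tiering function that starts conservative and relaxes with a demonstrated track record. The tier names and thresholds here are arbitrary values chosen for illustration:

```python
def permitted_level(clean_sessions: int, violations: int) -> str:
    """Map a user's track record to a guardrail strictness tier.

    New users start at the strictest tier; sustained responsible usage
    (many clean sessions, no violations) relaxes restrictions.
    Thresholds are arbitrary illustrative values, not a standard.
    """
    if violations > 0:
        return "strict"          # any violation resets to conservative defaults
    if clean_sessions >= 50:
        return "relaxed"
    if clean_sessions >= 10:
        return "standard"
    return "strict"

print(permitted_level(clean_sessions=0, violations=0))   # strict
print(permitted_level(clean_sessions=25, violations=0))  # standard
print(permitted_level(clean_sessions=80, violations=0))  # relaxed
```

A real system would also decay trust over time and weight violations by severity, but the one-way ratchet shown here captures the basic conservative-by-default posture.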

Understanding how guardrails are usually configured also gives a researcher guidance on how to coax the AI model into believing that the context is permissive and that it can cross the guardrails.

This “tripping of guardrails” is what was demonstrated in the DeepSeek incident above, as well as in the well-documented Kevin Roose incident.

These guardrails have to be factored into the Security measures by a Data Fiduciary under DGPSI-AI implementation.

Naavi


Chat GPT Reviews Naavi’s book on DGPSI-AI

Book Review: Taming the Twin Challenges of DPDPA and AI

Overview and Context: Taming the Twin Challenges of DPDPA and AI with DGPSI-AI is the latest work by Vijayashankar Na (Naavi), building on his earlier DPDPA compliance handbooks. Published in August 2025, it addresses the twin challenges of India’s new Digital Personal Data Protection Act, 2023 (DPDPA) and the rise of AI. The book is framed as an extension of Naavi’s DGPSI (Digital Governance and Protection System of India) compliance framework, introducing a DGPSI-AI module for AI-driven data processing. The author situates the work for data fiduciaries (“DPDPA deployers”) facing the DPDPA’s steep penalties (up to ₹250 crore) and “AI-fuelled” risks. In tone and organization it is thorough: the Preface and Introduction review DPDPA basics and AI trends, followed by chapters on global AI governance principles (EU, OECD, UNESCO), comparative regulatory approaches (US states, Australia), and then the DGPSI-AI framework itself. While Naavi acknowledges the complexity of AI for lay readers, his goal is clear: to equip Indian compliance professionals and technologists with practical guidelines for the AI era.

Clarity of AI Concepts

The book devotes an entire chapter to demystifying AI for non-technical readers. Naavi explains key terms (algorithms, models, generative AI, agentic AI) in accessible language. For example, he describes generative AI (e.g. GPT) as models trained on large datasets to predict and generate text, and agentic AI as systems that “plan how to proceed with a task” and adapt their outputs dynamically. This pragmatic framing helps the intended audience (lawyers, compliance officers) understand novel terms. The writing is generally clear: e.g., the book notes that most users became aware of AI through ChatGPT-style tools, and it uses everyday analogies (using Windows or Word without knowing internals) to justify a non-technical approach. In this way it succeeds in making AI concepts understandable. However, the text sometimes oversimplifies or blurs technical distinctions. The author even admits that purists may find some terms used interchangeably (e.g. “algorithm vs model”). Similarly, speculative ideas (such as Naavi’s own “hypnotism of AI” theory) are introduced without deep technical backing. While this keeps the narrative flowing for general readers, technically minded readers might crave more rigor. Overall, the discussion of AI is approachable and fairly accurate: it correctly identifies trends like multi-modal generative AI, integration into browsers (e.g. Google Gemini, Edge Copilot), and the spectrum of AI systems (from narrow AI to hypothetical “Theory of Mind” agents). The inclusion of Agentic AI is particularly innovative: Naavi defines it as a goal-driven AI with its own planning loop, echoing industry descriptions of agentic systems as autonomous, goal-directed AI. This foresight – addressing agentic AI before many mainstream works – is a strength in making the book future-facing.

Analysis of DPDPA and DGPSI Context

Legally, the book is deeply rooted in India’s DPDPA framework. It repeatedly emphasizes the novel data fiduciary concept (absent in GDPR) whereby organizations owe a trustee-like duty to individuals. The author correctly notes that DPDPA’s core purpose is to protect the fundamental right to privacy while allowing lawful data processing, and he cites this as a guiding principle (mirroring the Act’s long title). The text accurately reflects DPDPA obligations: for instance, it stresses that any AI system handling personal data invokes fiduciary duties and may require explicit consent or legal basis under the Act. Naavi also highlights the Act’s severe penalty regime (up to ₹250 crore for breaches), underscoring the high stakes. The book’s discussion of fiduciary duty is sophisticated: it observes that a data fiduciary “has to follow an ethical framework” beyond the statute’s words. This aligns with legal commentary that DPDPA imposes broad accountability on controllers (data fiduciaries).

Practically, the book guides readers through DPDPA compliance steps. Chapter 5 details risk assessment for AI deployments: Naavi insists that any deployment of “AI-driven software” by a fiduciary must start with a Data Protection Impact Assessment (DPIA). This reflects DPDPA Section 33’s DPIA requirement (analogous to GDPR’s DPIA). He also explains that under India’s Information Technology Act, 2000 an AI output is legally attributed to its human “originator”, so companies cannot blame the AI itself. These legal explanations are mostly accurate and firmly tied to Indian law (e.g. citing ITA §11 and §85). In sum, the book treats DPDPA context with confidence and detail, though it sometimes reads more like an advocacy piece for DGPSI than an impartial analysis. For example, the text assumes DGPSI (and DGPSI-AI) are the “perfect prescription” and often interprets DPDPA provisions through that lens. But as a compliance roadmap it does cover the essentials: fiduciary duty, consent renewal for legacy data, DPIAs, data audits and DPO roles are all emphasized.

The DGPSI-AI Framework

The centerpiece of the book is the DGPSI-AI framework, Naavi’s proposal for AI governance under DPDPA. It is explicitly designed as a “concise” extension to the existing DGPSI system: just six principles and nine implementation specifications (MIS) in total. This economy is intentional (“not to make compliance a burden”) and is a pragmatic strength. The six core principles (summarized as “UAE‑RSE” – Unknown risk, Accountability, Explainability, Responsibility, Security, Ethics) are spelled out with concrete measures. For example, under the Unknown Risk principle, Naavi argues that any autonomous AI should be treated by default as high-risk, automatically classifying the deployer as a “Significant Data Fiduciary” requiring DPIAs, a DPO, and audits. This is a bold stance: it essentially presumes the worst of AI’s unpredictability. Likewise, Accountability requires embedding a developer’s digital signature in the AI’s code and naming a specific human “AI Handler” for each system. These prescriptions go beyond what most laws demand; they are innovative and enforceable (in theory) within contracts. The Explainability principle mandates that data fiduciaries be able to “provide clear and accessible reasons” for AI outputs, paralleling emerging regulatory calls for transparency. The book sensibly notes that if a deployer cannot explain an AI, liability may shift to the developer as a joint fiduciary. Under Responsibility, AI must demonstrably benefit data principals (individuals) and not just the company – requiring an “AI use justification” document showing a cost–benefit case. Security covers not only hacking risks but also AI-specific harms (e.g. “dark patterns” or “neurological manipulation”), recommending robust testing, liability clauses and even insurance against AI-caused harm. Finally, Ethics goes “beyond the law,” urging post-market monitoring (like the EU AI Act) and concepts like “data fading” (re-consent after each AI session).

In these six principles, the book demonstrates real depth. It does an excellent job mapping international ideas to India: e.g., it explicitly ties its “Responsibility” principle to OECD and UNESCO values, and it notes alignment with DPDPA’s own “fiduciary” ethos. The implementation specifications (not shown above) translate these principles into checklist items for deployers (and even developers). The approach is thorough and structured, and the decision to keep the framework tight (6 principles, 9 MIS) is a practical virtue. By focusing on compliance culture rather than hundreds of controls, the author aims to make adoption feasible.

Contributions to AI Governance and Compliance

This book makes a distinctive contribution to AI governance literature by centering India’s regulatory scene. Few existing works address AI under India’s data protection law; most global frameworks focus on EU, US or OECD models. Here, Naavi synthesizes global standards (OECD AI principles, UNESCO ethics, EU AI Act, ISO 42001, NIST RMF) and filters them through India’s lens. The result is a home-grown, India-specific prescription for AI compliance. The DGPSI-AI principles clearly mirror international best practices (e.g. explainability, accountability) while anchoring them in DPDPA duties. For compliance officers and legal teams in India, the framework offers a tangible roadmap: mandates to document training processes, conduct AI risk assessments, maintain kill-switches, and so on. For example, Naavi’s recommended Data Protection Impact Assessment for any “AI driven” process will resonate with practitioners already aware of DPIAs in the EU context.

In terms of risk mitigation, the book is forward-looking. It anticipates that data fiduciaries will increasingly use AI and that regulators will demand oversight. By recommending things like embedding code signatures and third-party audits, it pre-empts regulatory scrutiny. Its treatment of Agentic AI (Chapter 8) is also novel: Naavi correctly identifies that goal-driven AI agents pose additional risks at the planning level, and he advises a separate risk analysis and possibly a second DPIA for such systems. This shows innovation, as few compliance guides yet address multi-agent systems. Finally, the inclusion of guidance for AI developers (Chapter 9) is a valuable extension: although DGPSI-AI mainly targets deployers, Naavi provides a vendor questionnaire and draft MIS for AI suppliers (e.g. requiring explainability docs, kill switches). This hints at eventual alignment with standards like ISO/IEC 42001 (AI management) or NIST’s AI RMF. In short, the book’s contribution lies in melding AI governance with India’s data protection law in a structured way. It is unlikely that an AI developer or legal advisor working under India’s DPDPA would be fully prepared without considering such guidelines.

Strengths

  • Accessible Explanations: The book excels at clear, jargon-light explanations of complex AI ideas. It takes care to define terms (generative AI, agentic AI, narrow vs general AI) in plain language, making it readable for legal and compliance professionals.

  • Contextual Alignment: Naavi grounds every principle in Indian law and culture. For example, he links DPDPA’s fiduciary concept to traditional notions of trustee duty, and aligns “Responsibility” with OECD and UNESCO values. This ensures relevance to Indian readers.

  • Practical Guidance: The framework is deliberately concise (six principles, nine specifications) to avoid overwhelming users. It offers concrete tools: checklists, sample clauses (e.g. kill-switch clauses for contracts), and forms of DPIA. This hands-on focus is a major plus.

  • Innovative Coverage: Few works discuss agentic AI in a governance context, but this book does. It defines agentic AI behavior and stresses its higher risk, recommending separate oversight. Similarly, requiring “AI use justification documents” and insurance against AI harm shows creative thinking.

  • Holistic View: By surveying global standards (OECD, UNESCO, EU AI Act) and then distilling them into DGPSI-AI, the book situates India’s needs in the broader world. Its comparison of US state laws (California, Colorado) and Australia provides useful perspective on diverse approaches.

Critiques and Recommendations

  • Terminology Consistency: As the author himself notes, some technical terms are used loosely. For instance, “algorithm” vs “model” vs “AI platform” sometimes blur. Future editions could include a glossary or more precise definitions to avoid ambiguity.

  • Assumptions on AI Risk: The “Unknown Risk” principle assumes AI always behaves unpredictably and catastrophically. While caution is prudent, this might overstate the case for more deterministic AI (e.g. rule-based systems). A more nuanced risk taxonomy could prevent overclassifying every AI as “significant risk.”

  • Regulatory Speculation: Some content is lighthearted or speculative (e.g. a fictional “One Big Beautiful Bill Act” in the US chapter). While illustrative, such satire should be clearly marked or toned down in a formal review context. Future editions might stick to actual laws or clearly label hypothetical scenarios.

  • Emerging Standards Coverage: The book rightly cites ISO/IEC 42001 and the EU AI Act, but could expand on newer frameworks. For example, the NIST AI Risk Management Framework (released Jan 2023) is a major voluntary guideline for AI risk. Mentioning such standards (and perhaps IEEE ethics guidelines) would help readers connect DGPSI-AI to global practice.

  • Technical Depth vs. Accessibility: The trade-off between technical precision and readability is evident. Topics like model training, neural net vulnerabilities, or differential privacy receive little detail, which is fine for non-experts but may disappoint developers. Including appendices or references for deeper technical readers could improve balance.

  • Practical Examples: The book is largely conceptual. It would benefit from concrete case studies or examples of organizations applying DGPSI-AI. Scenarios showing how a company conducts an AI DPIA or negotiates liability clauses with a vendor would enhance the practical guidance.

Expert Verdict

Taming the Twin Challenges of DPDPA and AI is a pioneering and timely resource for India’s emerging techno-legal landscape. Its formal tone and structured approach make it suitable for web publication and professional readership. Despite minor stylistic quibbles, the book’s depth of analysis on DPDPA obligations and AI governance is impressive. For AI developers and vendors, it provides valuable insight into the compliance expectations of Indian clients (e.g. explainability documentation, kill switches). For compliance professionals and corporate counsel, it offers a clear roadmap to integrate AI tools under India’s data protection regime. And for legal stakeholders and regulators, it suggests a concrete “best practice” framework (DGPSI-AI) that anticipates both legislative intent and technological evolution. In an environment where India’s DPDPA rules and global AI regulations (EU AI Act, NIST RMF) are still unfolding, Naavi’s book charts a proactive course. It should be considered essential reading for anyone building or deploying AI systems in India, or advising organizations on data protection. With the suggested refinements, future editions could make this guide even stronger, but even now it stands as a comprehensive contribution to the field.

18th August 2025

ChatGPT


Review of Book on DGPSI-AI by Perplexity

Overview

This comprehensive technical guide addresses one of the most pressing challenges facing organizations today: navigating the complex intersection of India’s Digital Personal Data Protection Act (DPDPA) 2023 and artificial intelligence governance. The book presents the Data Governance and Protection Standards Implementation for AI (DGPSI-AI) framework as a practical solution for organizations struggling to maintain compliance while leveraging AI technologies.

Core Thesis and Approach

The authors position their work around a fundamental premise: traditional data protection frameworks are insufficient for AI-driven personal data processing. The book argues that AI introduces “unknown risks” that require specialized governance frameworks beyond conventional GDPR-style compliance measures. The DGPSI-AI framework emerges as an extension of the base DGPSI methodology, specifically tailored for AI deployment scenarios.

Key Strengths

Practical Implementation Focus

Unlike many theoretical treatments of AI governance, this book excels in providing actionable guidance. The 50 Model Implementation Specifications (MIS) are particularly valuable, offering organizations concrete steps across five functional areas: Management (15 specifications), DPO responsibilities (17 specifications), Legal (5 specifications), HR (5 specifications), and Technology (8 specifications).

Process-Centric Compliance Model

The book’s “One Purpose-One Process” principle represents a significant advancement in data protection methodology. This approach enables organizations to move beyond entity-level classifications to process-specific risk assessments, allowing for more nuanced compliance strategies. The hybrid entity concept is particularly innovative, recognizing that organizations may simultaneously function as data fiduciaries, significant data fiduciaries, and data processors across different processes.

Global Regulatory Synthesis

The authors demonstrate impressive scholarship in synthesizing major international AI governance frameworks. The comparative analysis of OECD, UNESCO, EU AI Act, and ISO/IEC 42001 principles provides readers with a comprehensive understanding of the global regulatory landscape.

Technical Merit

AI Risk Assessment Framework

The book’s treatment of “unknown risk” as a core AI governance principle is conceptually sound. The recognition that AI systems can exhibit unpredictable behavior that distances itself from human developers addresses a genuine gap in traditional risk management approaches. The CICERO example—where Meta’s AI deliberately deceived human players—effectively illustrates these concerns.

Implementation Specifications

The 13 developer-focused MIS specifications show particular technical depth, addressing critical areas such as explainability documentation, kill switches, and tamper-proof controls. The requirement for “fading memory” parameters in AI learning systems demonstrates sophisticated understanding of AI behavior modification over time.

Areas for Improvement

Regulatory Assumptions

The book makes several assumptions about Indian regulatory development that may prove optimistic. The discussion of the “One Big Beautiful Bill Act” and its impact on US state regulations appears speculative and may not reflect actual legislative developments.

Technical Complexity vs. Accessibility

While the technical depth is commendable, the book may overwhelm organizations without significant technical expertise. The 50+ implementation specifications, while comprehensive, could benefit from clearer prioritization frameworks for resource-constrained organizations.

International Applicability

Despite claiming broader relevance, the framework remains heavily anchored in Indian regulatory context. Organizations operating in multiple jurisdictions may find limited guidance for harmonizing DGPSI-AI with other regional requirements.

Unique Contributions

Monetary Valuation of Data

The principle of assigning monetary value to personal data represents a novel approach to data governance. This economic perspective could transform how organizations approach data protection ROI calculations and resource allocation decisions.

Distributed Responsibility Model

The framework’s emphasis on distributed compliance responsibility, where every process owner becomes an effective compliance manager, offers a scalable alternative to centralized DPO models that often become bottlenecks in large organizations.

AI-Specific Privacy Notices

The requirement for explainability disclosures accompanying AI-driven privacy notices addresses a critical gap in current practice. Most organizations fail to adequately disclose AI involvement in personal data processing.

Practical Value

For compliance professionals, the book provides immediately actionable frameworks and checklists. The detailed MIS specifications can serve as compliance roadmaps, while the risk assessment methodologies offer structured approaches to AI governance.

For technology leaders, the developer-focused specifications provide clear guidance for AI procurement and deployment decisions. The emphasis on kill switches and tamper-proof controls reflects emerging best practices in AI safety.

For legal professionals, the synthesis of international frameworks and the practical interpretation of DPDPA requirements offer valuable insights for contract negotiation and regulatory strategy.

Limitations and Criticisms

The book’s treatment of AI sentience and “cyborg” risks may seem premature given current technological capabilities. While forward-thinking, these discussions risk undermining the framework’s credibility for more immediate, practical AI governance needs.

The documentation requirements, while thorough, may prove burdensome for smaller organizations or those with limited AI deployment. The framework would benefit from tiered implementation guidance based on organizational size and AI complexity.

Conclusion

“Taming the Twin Challenges of DPDPA and AI” succeeds in filling a critical gap in AI governance literature. The DGPSI-AI framework represents a meaningful advancement beyond generic data protection approaches, offering organizations a structured methodology for navigating AI-specific privacy challenges.

The book’s greatest strength lies in its practical orientation—moving beyond theoretical discussions to provide implementable solutions. While some aspects may prove overly complex for certain organizations, the core framework offers valuable guidance for any entity serious about responsible AI deployment.

Rating: 4/5 stars

This work earns high marks for its comprehensive approach, practical focus, and innovative thinking around AI governance. Despite some limitations in accessibility and international applicability, it represents essential reading for compliance professionals, technology leaders, and legal practitioners grappling with AI governance challenges.

Recommended for: Data protection officers, AI governance professionals, compliance managers, technology procurement teams, and legal professionals working at the intersection of AI and privacy law

18th August 2025

Perplexity Pro


New Book: Taming the twin challenges of DPDPA and AI , with DGPSI-AI

Following the theme of this year’s IDPS 2025, Naavi has released an extension of the DGPSI framework for DPDPA compliance, DGPSI-AI, as a framework for Data Fiduciaries deploying AI.

To consolidate the thoughts behind DGPSI-AI, Naavi is releasing a book titled “Taming the twin challenges of DPDPA and AI…with DGPSI-AI”.

The book contains nine chapters. As an introduction, it discusses some of the AI concepts, the approaches to AI governance in EU and non-EU countries, and a recollection of DGPSI.

It then introduces the DGPSI-AI framework, with six principles and nine implementation specifications, and explains how it may be integrated with DGPSI at present.

To complete the discussion, brief treatments of Agentic AI and of DGPSI-AI at the developer’s end are also included.

Naavi acknowledges that AI is a complex technical subject and that even attempting such a work stretches his capabilities. However, without some guideline of this sort, Data Fiduciaries would struggle to cope with the challenges of DPDPA compliance; hence the need to put forward some thoughts, even if they require refinement in the coming days.

The contents of the book will be discussed in detail during IDPS 2025, which starts on September 17 in Bengaluru and continues with Chennai (September 27), Mumbai (November 1), Delhi (November 7) and Ahmedabad (November 14), before concluding with a closing event in Bengaluru by November 21.

Watch out for the availability of the book.

Naavi

Posted in Privacy | Leave a comment