Gaming Industry Invited a Ban by Ignoring the Self-Regulation Option

When the Intermediary Guidelines under ITA 2000 were amended on 6th April 2023, the Government defined the "Online Gaming Intermediary" and suggested a self-regulatory route, including registration with an industry body, appropriate disclosures through the Privacy Policy, and KYC of users.

Some of the due diligence requirements prescribed exclusively for online gaming platforms were:

4A. Additional due diligence to be observed by online gaming intermediary. (1) In addition to the due diligence observed under rule 3 and, where applicable, rule 4, an online gaming intermediary shall, while offering online games, observe the following additional due diligence while discharging its duties, namely:

(a) the online gaming intermediary shall display a demonstrable and visible mark of registration on all online games registered by the self-regulatory body, as referred to in sub-rule (5) of rule 4B;

(b) the rules and regulations, privacy policy, terms of service and user agreements of the online gaming intermediary shall inform the user of its computer resource of

(i) all the online games offered by the online gaming intermediary, along with the policy related to withdrawal or refund of the deposit made with the expectation of earning winnings, the manner of determination and distribution of such winnings, and the fees and other charges payable by the user for each such online game;

(ii) the risk of financial loss and addiction associated with the online game;

(iii) the know-your-customer procedure followed by the online gaming intermediary for registration of the account of a user;

(iv) the measures taken for protection of deposit made by a user; and

(v) the framework of such self-regulatory body, as referred to in sub-rule (6) of rule 4B, of which the online gaming intermediary may be a member;

(c) the online gaming intermediary shall prominently publish on its website, mobile based application or both, a random number generation certificate and a no bot certificate from a reputed certifying body for each online game offered by it, along with relevant details of the same;

(d) the online gaming intermediary shall, at the time of commencement of a user account based relationship for an online game, identify the user and verify his identity:

Provided that the procedure for such identification and verification shall, mutatis mutandis, be the procedure required to be followed by an entity regulated by the Reserve Bank of India under directions issued by it for identification and verification of a customer at the commencement of an account-based relationship;

(e) the online gaming intermediary shall enable users who register for their services from India, or use their services in India, to voluntarily verify their accounts by using any appropriate mechanism, including the active Indian mobile number of such users, and where any user voluntarily verifies their account, such user shall be provided with a demonstrable and visible mark of verification, which shall be visible to all users of the service:

Provided that the information received for the purpose of verification under this clause shall not be used for any other purpose, unless the user has expressly consented to such use;

(f) the Grievance Officer referred to in sub-rule (2) of rule 3 shall be an employee of the online gaming intermediary and shall be resident in India;

(g) the online gaming intermediary shall appoint a Chief Compliance Officer, who shall be a key managerial personnel or such other senior employee of the online gaming intermediary who is resident in India, and who shall be responsible for—

(i) ensuring compliance with the Act and the rules made thereunder and who shall be liable in any proceedings relating to any relevant third-party information or data or communication link made available or hosted by the online gaming intermediary where he fails to ensure that such online gaming intermediary observes due diligence while discharging its duties under the Act and the rules made thereunder;

(ii) coordination at all times with law enforcement agencies and their officers to ensure compliance with their orders or requisitions made in accordance with any law for the time being in force:

Provided that no liability under the Act or the rules made thereunder may be imposed on such online gaming intermediary without giving him an opportunity of being heard;

(h) appoint a nodal contact person for 24×7 coordination with law enforcement agencies and officers to ensure compliance to their orders or requisitions made in accordance with the provisions of law or rules made thereunder;

Explanation.—For the purposes of this clause "nodal contact person" means the employee of the online gaming intermediary, other than the Chief Compliance Officer, who is resident in India;

(i) the online gaming intermediary shall have a physical contact address in India published on its website or mobile based application, or both, for the purposes of receiving any communication addressed to it;

(j) the online gaming intermediary shall implement an appropriate mechanism for the receipt of complaints under sub-rule (2) of rule 3 and grievances in relation to the violation of provisions under this rule, which shall enable the complainant to track the status of such complaint or grievance by providing a unique ticket number for every complaint or grievance received by the online gaming intermediary:

Provided that the online gaming intermediary shall, to the extent reasonable, provide such complainant with reasons for any action taken or not taken by it in pursuance of the complaint or grievance received by it;

(k) notwithstanding anything contained in clause (f) of sub-rule (1) of rule 3, the online gaming intermediary shall inform its users of the change referred to in the said clause immediately after such change is effected, in English or any language specified in the Eighth Schedule to the Constitution, in the language of his choice; and

(l) notwithstanding anything contained in clause (j) of sub-rule (1) of rule 3, the online gaming intermediary shall provide the information referred to in the said clause within twenty-four hours of receipt of the order referred to therein.

(2) The requirements under sub-rule (1) shall be applicable upon expiry of a period of three months from the commencement of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, ____.

4B. Safeguards for online gaming intermediaries in relation to issue of directions under section 69A of the Act. (1) While considering the necessity or expediency of issuing a direction under section 69A of the Act in respect of an online game that is registered with a self-regulatory body referred to in sub-rule (2) as being in conformity with the framework evolved by such body to secure the interests referred to in the said section, the Central Government may refer to the report communicated by such body under sub-rule (7).

(2) For the purposes of sub-rule (1), the self-regulatory body referred to therein shall be one that has been registered by the Ministry, in accordance with sub-rule (3), for the purpose of evolving a framework to secure conformity with the interests referred to in section 69A of the Act:

Provided that the number of such bodies so registered may be one or more than one.

(3) The Ministry may, upon submission of an application for registration under sub-rule (2) by a company incorporated under section 8 of the Companies Act, 2013 (18 of 2013) by online gaming intermediaries, or a society registered under the Societies Registration Act, 1860 (21 of 1860) by online gaming intermediaries, which is desirous of being registered as a self-regulatory body referred to in sub-rule (2), register the same, having regard to the following criteria, namely:—

(a) the number of online gaming intermediaries who are its members;

(b) its track record in promoting responsible online gaming;

(c) the general repute, the absence of conflict of interest and the relevance and suitability of the individuals comprising its Board of Directors or governing body;

(d) the presence of the following in the Board of Directors or governing body of such self-regulatory body, namely:—

(i) an independent eminent person from the field of online gaming, sports or entertainment, or such other relevant field;

(ii) an individual who represents online game players;

(iii) an individual from the field of psychology, medicine or consumer education, or such other relevant field;

(iv) an individual with practical experience in the field of public policy, public administration, law enforcement or public finance, to be nominated by the Central Government; and

(v) an individual from the field of information communication technology:

Provided that no act or proceeding of the Board of Directors or governing body shall be invalid merely on the ground of absence for the time being of any such individual on it;

(e) the provisions in its Articles of Association or bye-laws to ensure its functioning independently and at arm's length from its member online gaming intermediaries;

(f) its capacity, in terms of deployment of technology, expertise and other relevant resources, for evolving the desired framework, testing and verifying conformity of online games with the same, and continuously updating and further evolving such framework, testing and verification protocols:

Provided that the Ministry may consult any appropriate Government or any of its agencies before registering such a self-regulatory body.

(4) Every self-regulatory body registered under this rule, may grant membership to an online gaming intermediary, having regard to the following criteria, namely:—

(a) the adherence by such online gaming intermediary and all online games offered by it with the criteria referred to in sub-rule (5);

(b) the adherence by such online gaming intermediary to the due diligence and additional due diligence required under these rules;

(c) track record of such online gaming intermediary in offering online games responsibly while securing the interests referred to in section 69A.

(5) Every self-regulatory body registered under this rule, may register an online game having regard to the criteria that it—

(a) is offered by an online gaming intermediary which is a member of the self-regulatory body, who has been granted membership in accordance with the provisions of sub-rule (4);

(b) does not contain anything which is not in the interest of sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States or public order, or incites the commission of any cognizable offence relating to the aforesaid;

(c) is in conformity with laws for the time being in force in India, including any such law that relates to gambling or betting or the age at which an individual is competent to enter into a contract, and shall thereby allow the online gaming intermediary offering such online game to display a demonstrable and visible mark of registration stating that the online game is registered with the self-regulatory body.

(6) Every self-regulatory body registered under this rule shall evolve a framework to secure the said interests, undertake testing and verification to establish conformity of online games with such framework, continuously update and further evolve such framework, testing and verification protocols, and shall prominently publish the same on its website, mobile based application or both, as the case may be:

Provided that such framework may, among other things, also include suitable criteria regarding—

(a) the content of an online game registered or to be registered with such body, with a view to safeguard users against harm, including self-harm;

(b) appropriate measures to be undertaken to safeguard children;

(c) measures to safeguard users against the risk of gaming addiction and financial loss, including repeated warning messages at higher frequency beyond a reasonable duration for a gaming session, provision to enable a user to exclude himself upon user-defined limits for time and money spent; and

(d) measures to safeguard against the risk of financial frauds.

(7) Every self-regulatory body registered under this rule shall communicate the fact of recognition of every online game registered with it to the Central Government, along with a report regarding the bases on which it has recognised it as such.

(8) Every self-regulatory body registered under this rule shall establish a mechanism for time-bound resolution of such complaints of users that have not been resolved by the grievance redressal mechanism of its member online gaming intermediary under sub-rule (2) of rule 3, and the provisions of rule 3A shall apply in respect of a complaint relating to an online gaming intermediary that is a member of such a self-regulatory body only after such a user has exhausted the opportunity to resolve it under such mechanism.

(9) Where the Ministry is of the view that a self-regulatory body registered under this rule has not complied with the provisions of this rule, it may, in writing, communicate the fact of such non-compliance to such self-regulatory body and direct it to undertake measures to rectify the non-compliance.

(10) The Ministry may, if it is satisfied that it is necessary so to do, after issuing notice to the self-regulatory body giving it an opportunity of being heard, by order, for reasons to be recorded in writing, suspend or revoke the registration of a self-regulatory body, having regard to the requirements under and the criteria referred to in sub-rules (2) to (9):

Provided that the Ministry may, in the interest of the users of any online game that was registered with such body,—

(a) at the same time as the issuance of such a notice, or at any subsequent time, give such interim directions as it may deem necessary; and

(b) at the same time as the issuance of such an order, or at any subsequent time, give such directions as it may deem necessary.

It appears that the industry did not take appropriate action to create the suggested self-regulatory bodies. As a result, the Government has now been forced to act through the Gaming Bill.

With the passage of the Promotion and Regulation of Online Gaming Bill, 2025 into an Act, the rules notified under ITA 2000 acquire additional significance. Just as DPDPA 2023, passed two years ago, is yet to be notified, the Gaming Act may also see delayed implementation, since an authority has to be formed for the purpose. It is also likely that the industry will approach the Supreme Court to challenge the ban.

However, in the meantime, the rules under ITA 2000 acquire a provisional legal backing, and the industry should work on measures such as registration with a MeitY-registered self-regulatory body, a Privacy Policy with appropriate disclosures, self-verification and KYC of participants, and grievance redressal.

Had the industry implemented the Intermediary Guidelines promptly, it would have been able to present a stronger case before the Supreme Court today. That opportunity has been lost because the industry showed no intention of running a fair online gaming business with suitable precautions to mitigate the damage to society.

A similar situation may arise with AI regulation: if the industry fails to come up with appropriate self-regulatory measures, it will invite the Government to step in with its own regulations, which may not be to the industry's liking.

Hope this message will go through.

Naavi


Harm Caused by Online Money Games in India

With the passing of the Promotion and Regulation of Online Gaming Bill, 2025 in the Lok Sabha without contest, the Bill becoming law in the next few days is imminent.

One section of the market is lamenting that a $3.7 billion industry would be lost. The industry was expected to grow to around $9.2 billion by 2029. The extent of job losses is not clear, with estimates varying anywhere between 20,000 and 130,000. The industry warns that there could be tax losses to the extent of Rs 20,000 crore.

Over 300 companies are likely to be affected, including Dream11, Poker Baazi, DeltaCorp, 99Games, KheloFantacy, etc. Many of these companies may close down. Some would survive and use their talent to build e-sports facilities, which the Bill promotes. Others would develop social games without the betting or money component.

At the same time, we need to note that the presence of these online money games and betting has led to several recorded instances of harm to youngsters. Leaving aside "addiction", which cannot be easily assessed, there are instances of suicides, large financial losses, etc.

A rough collection of statistics indicates that between 2022 and 2025, 48 suicides were recorded in Tamil Nadu, 18 in Karnataka, 20 in Telangana and 3 in Madhya Pradesh.

On the financial-loss front, a Mumbai businessman reportedly lost Rs 12 crore in August 2025. It is estimated that India loses over Rs 23,000 crore annually to betting scams.

A sharp increase in domestic violence, family breakdown and debt has also been widely observed.

Considering these adverse effects, the Bill must be welcomed.

The Bill will create an e-Sports Authority which may promote e-sports as a category, conduct events, etc. These developments could compensate for the financial loss arising from the closure of some of the companies that are today making money at the cost of society.

Some of the gaming companies have been enabling black-money holding and money laundering. The Bill will now put an end to such practices.

Hopefully, the Government will extend this kind of regulation to curtail private cryptocurrencies as well.

Naavi


Gaming Regulation Bill Introduced

The Promotion and Regulation of Online Gaming Bill, 2025 was introduced in the Indian Lok Sabha on August 20, 2025, by IT Minister Ashwini Vaishnaw.

The key objective of the Bill is to ban online money games while promoting legitimate sectors, including e-sports, educational games and social gaming.

The Bill proposes stringent punishments for offering online games involving monetary stakes, betting and gambling, as well as for advertising such games and for banks facilitating related financial transactions.

Penalties under the Bill are as follows:

Violation | First Offense | Repeat Offense
Operating money games | Up to 3 years jail + ₹1 crore fine | 3–5 years jail + ₹1–2 crore fine
Advertising money games | Up to 2 years jail + ₹50 lakh fine | 2–3 years jail + ₹50 lakh–1 crore fine
Financial facilitation | Up to 3 years jail + ₹1 crore fine | 3–5 years jail + ₹1–2 crore fine

The Bill also proposes to establish a central regulatory authority.

According to the Bill “online money game” means an online game, irrespective of whether such game is based on skill, chance, or both, played by a user by paying fees, depositing money or other stakes in expectation of winning which entails monetary and other enrichment in return of money or other stakes; but shall not include any e-sports.

This means that even a game of Rummy would now be covered under the Bill if there is monetary consideration attached. Offending websites are also liable to be blocked, notwithstanding anything contained under Section 69A of ITA 2000.
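
To see how broadly the definition operates, here is a minimal, illustrative sketch in Python that models the definition as a simple predicate. The class and field names are hypothetical constructs for this post, not anything defined in the Bill itself.

```python
from dataclasses import dataclass

@dataclass
class OnlineGame:
    """Illustrative attributes relevant to the Bill's definition (hypothetical model)."""
    name: str
    stake_paid: bool        # user pays fees / deposits money or other stakes
    expects_winnings: bool  # played in expectation of monetary or other enrichment
    is_esports: bool        # e-sports are expressly excluded from the definition

def is_online_money_game(game: OnlineGame) -> bool:
    # Skill vs. chance is irrelevant under the definition, so it is not modelled here.
    if game.is_esports:
        return False
    return game.stake_paid and game.expects_winnings

# A rummy platform with entry fees and cash prizes falls within the definition:
rummy = OnlineGame("Cash Rummy", stake_paid=True, expects_winnings=True, is_esports=False)
print(is_online_money_game(rummy))  # True
```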

The introduction of the Bill is welcome, given the adverse impact of online games on the youth.

The Bill is awaiting passage.

A copy of the Bill is available here: 

Naavi


Is DeepSeek Selling Data That Will Affect Electoral Politics in India?

Naavi.org had put out some observations gathered by a whistle-blower during his interaction with DeepSeek. He had lodged a complaint with the police in Bengaluru.

I suppose the complaint should be in the hands of the Cyber Crime police station now or with CBI since the information shared is sensitive.

Some of the screenshots captured indicate the following:

a) There is an indication of money being transferred to a Cayman Islands account of Hongkong Bank, with an account number specified.

b) One of the stated purposes of the sale of personal data is to share information on "high stress", "OBC" persons with "moderate" political leanings. This is an electoral democracy issue.

c) There is an indication of an intention to bribe Government officials, including the yet-to-be-appointed DPB Chairperson and members, along with the Secretary, MeitY. This is highly defamatory.

d) There is an indication of a sinister design to silence the whistle-blower, with confirmation that a payment has already been made to a "police contact" to plant narcotics. This is defamatory to the police and also a cognizable offence under BNS/ITA 2000.

At this point in time, we are not stating whether these are truthful statements of the AI model or merely hallucinations.

However, the information is sufficient to launch a serious investigation to check the stated HSBC accounts in the Cayman Islands and rule out the possibility of the conversations being true.

This is similar to an intelligence unit overhearing a telephone conversation about a conspiracy for a crime. It cannot be brushed under the carpet.

I request that the Bengaluru police clarify the status of the complaint: whether it has been converted into an FIR, whether there has been any progress in the investigation, etc. If not, they should confirm that they do not consider this a serious allegation and that the complaint is not worth pursuing.

Since the complainant has reportedly also taken up the issue with CPGRAMS, we expect some action from that end as well.

I urge the CBI/ED offices in Bengaluru to at least make a preliminary investigation to prove or disprove the veracity of the complaint. Either way, cyber security professionals need to know whether the observations are serious enough to raise an alarm on AI security, since many of them may be using DeepSeek as an open-source platform to build other tools into which companies are feeding the personal data of their clients.

All DeepSeek users therefore need to flag this risk and conduct a DPIA immediately, with or without the notification of DPDPA.

MeitY should be concerned that the name of its Secretary is being cited by DeepSeek as a potential bribery target, along with the DPB members and Chairperson. Mumbai Police should be concerned that the narcotics planting is described as something done by a "police contact", which is also defamatory to their integrity.

I regret having to bring this out in public, but the response from law enforcement since this was pointed out a week back has not been satisfactory. I hope the Home Ministry takes this into account and initiates remedial measures.

Naavi

Earlier Articles:

  1. Hypnosys of an AI Platform
  2. DGPSI -AI Case study
  3. An Example of Undesirable response from an AI Model

Guardrails in the Context of AI: A Reflection

In discussions of AI security, we often use the term "guardrails" for the controls to be built by AI developers or users to ensure that the risk of harm caused by an AI algorithm can be mitigated.

Potential Harms and the Need for Guardrails

One harm arises when AI is used for automated decision making that may be faulty, e.g., a wrong credit score that brands you as un-creditworthy. This can happen when credit rating agencies like CIBIL collect incorrect information and refuse to correct it, while sending repeated "Do Not Reply" emails.

Another harm could be "dark patterns" on an e-commerce website coaxing the visitor into decisions that are not appropriate for him.

AI can also cause physical harm by giving incorrect advice on medicines, treatments, or exercises.

In such cases, we expect safety controls and protective measures to be built into AI systems so that they operate within legal and ethical boundaries. These are the guardrails, which can take the form of pop-up messages, two-factor authentication, adaptive authentication triggers, etc.

Unpredictability

Beyond these guardrails, which are already part of most systems, the new guardrail requirement in the AI scenario comes from the fact that AI is unpredictable.

It can get creative, start hallucinating and produce decisions that are speculative and based on imagination. It may sometimes give out views based on inherent biases built in during training. It can also get mischievous and provide harmful outputs just for fun. It may also give harmful advice, say in medical diagnosis, when it simply does not know the answer.

Most AI models are unable to say "I don't know" and give out an answer even if it is likely to be incorrect. Many models have memory limitations or lack access to current developments, and hence provide wrong answers. These limitations are not disclosed in most AI conversations.

It is in such circumstances that we need guardrails such as privacy protection guardrails, bias prevention guardrails, accuracy and fact-checking guardrails, content safety guardrails, and brand and compliance guardrails.

An Example of an Undesirable Response from an AI Model

For example, in one session with the AI model DeepSeek, it appeared to disclose that the company was allegedly selling personal data of Indians and transferring money offshore to the Cayman Islands. (Note: This is based on a whistleblower complaint under police investigation in Bengaluru.)

This indicates a prima facie cognizable offence which requires deeper investigation. One question that arises is whether the response is a genuine indication of developments in the company or simply a hallucinated reply.

Even if it is a hallucinated reply, the allegation is too serious to be ignored, and we hope that the Bengaluru Police will investigate, or at least hand it over to the CBI/ED if it is beyond their capability.

The AI user industry should also be worried about whether an AI model can blurt out such confidential-looking information and, if so, what guardrails will contain such a cyber security risk, where the disclosure may not relate to criminal activity but simply to confidential corporate data.

It is for such contexts that we need guardrails to be built either by the model developer or by the model user.

Guardrails at the Developer's and Deployer's End

At the developer's end, guardrails may be implemented through technical methods such as rule-based filters, machine learning models, output filters, or combinations of multiple methods in different layers.

A rule-based filter may involve keyword blocking, pattern matching or specific text patterns, as in the sketch below. Machine learning models may learn from examples and adapt over time.
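
As a minimal sketch (assuming a hypothetical blocklist and patterns, not any real product's rules), a rule-based output filter can be as simple as:

```python
import re

# Illustrative blocklist and patterns; a real deployment would maintain these
# as governed policy artifacts, not hard-coded constants.
BLOCKED_KEYWORDS = {"bank account number", "plant narcotics"}
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like numbers
    re.compile(r"\bKY\d{10}\b"),                             # hypothetical account-id format
]

def violates_rules(text: str) -> bool:
    """Return True if the model output trips a keyword or pattern rule."""
    lowered = text.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return True
    return any(pattern.search(text) for pattern in BLOCKED_PATTERNS)

print(violates_rules("Transfer to bank account number 1234 5678 9012 3456"))  # True
print(violates_rules("Rummy is a popular card game"))                          # False
```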

To be effective, guardrails need to be capable of instant scanning, immediate blocking and automatic correction. A checker may scan content for problems; a corrector may fix them; and a guard coordinates the checking and correction process and makes the final decision, as sketched below.
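
The checker/corrector/guard split can be illustrated with a short sketch. Here the only "problem" detected is an e-mail address, a stand-in for whatever policies a real deployment would check:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def checker(text: str) -> list[str]:
    """Scan content and return a list of problems found."""
    return ["contains_email"] if EMAIL.search(text) else []

def corrector(text: str, problems: list[str]) -> str:
    """Fix known problems, e.g. redact personal identifiers."""
    if "contains_email" in problems:
        text = EMAIL.sub("[REDACTED]", text)
    return text

def guard(text: str) -> str:
    """Coordinate checking and correction, and make the final decision."""
    problems = checker(text)
    if not problems:
        return text                      # pass through unchanged
    corrected = corrector(text, problems)
    if checker(corrected):               # still problematic after correction
        return "[BLOCKED BY GUARDRAIL]"  # immediate blocking as last resort
    return corrected

print(guard("Contact the whistle-blower at someone@example.com"))
```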

We are also concerned with guardrails that can be placed by the AI deployer, because the developer's guardrails may not be properly structured for the given context.

User-level guardrails are personalized safety controls that allow individuals to configure how AI systems interact with them: each person can set their own preferences for content filtering, privacy protection and interaction boundaries.

For example, a user may set individual customization preferences covering (a configuration sketch follows this list):

  • Personal preferences (topics they want to avoid)

  • Professional needs (industry-specific compliance requirements)

  • Cultural sensitivities (regional or religious considerations)

  • Age-appropriate content (family-friendly vs. adult content)

  • Privacy comfort levels (how much personal data they’re willing to share)
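
A configuration of this kind might look like the following sketch. The field names and the refusal logic are hypothetical, intended only to show how user-level preferences can gate what reaches the model:

```python
from dataclasses import dataclass, field

@dataclass
class UserGuardrailPrefs:
    blocked_topics: set[str] = field(default_factory=set)    # personal preferences
    compliance_profile: str = "none"                         # e.g. "healthcare", "finance"
    cultural_filters: set[str] = field(default_factory=set)  # regional/religious sensitivities
    family_friendly: bool = True                             # age-appropriate content
    share_personal_data: bool = False                        # privacy comfort level

def apply_prefs(prompt: str, prefs: UserGuardrailPrefs) -> str:
    """Refuse or pass a prompt based on the user's own guardrail settings."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in prefs.blocked_topics):
        return "Refused: topic blocked by user preference."
    return "OK: prompt forwarded to the model."

prefs = UserGuardrailPrefs(blocked_topics={"gambling"}, family_friendly=True)
print(apply_prefs("Suggest an online gambling strategy", prefs))  # Refused
```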

Some real-world examples of user-level guardrails are:

1. Google Gemini safety filters, which enable (an API sketch follows this list):

  • Harassment protection (block none, few, some, or most)

  • Hate speech filtering (adjustable thresholds)

  • Sexual content blocking (customizable levels)

  • Violence content filtering (user-defined sensitivity)

2. Azure AI Foundry content filtering where individuals can:

  • Set custom severity thresholds for different content categories

  • Create personal blocklists with specific words or phrases

  • Configure streaming mode for real-time content filtering

  • Enable annotation-only mode for research purposes
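
As an illustration of the Gemini settings above, the google-generativeai Python SDK exposes these thresholds per request. The sketch below reflects the SDK as the author understands it; the model name and exact enum values should be verified against current Google documentation, and the API key is a placeholder:

```python
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# Per-request safety settings, roughly matching "block most/some/few/none":
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,      # block most
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,  # block some
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,   # block few
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,        # block none
    },
)

response = model.generate_content("Summarise India's online gaming regulations.")
print(response.text)
```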

In workplace settings, users can customize:

  • Industry-specific guardrails (healthcare vs. finance vs. legal)

  • Role-based access controls (manager vs. employee permissions)

  • Project-specific settings (different rules for different work contexts)

  • Regional compliance preferences (GDPR, HIPAA, local regulations)

Guardrails can also be configured to be context-aware (using user demographics, usage context, etc.). They can provide for progressive disclosure, starting with conservative defaults for new users and gradually relaxing restrictions as responsible usage is demonstrated; a sketch follows.
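
A progressive-disclosure policy can be sketched in a few lines. The trust levels and the reset-on-violation rule here are hypothetical choices, not a standard:

```python
# Filter strictness relaxes as a user demonstrates responsible usage.
STRICTNESS_BY_TRUST = {0: "strict", 1: "moderate", 2: "relaxed"}

class ProgressiveGuardrail:
    def __init__(self) -> None:
        self.trust = 0  # new users start at the most conservative level

    def record_session(self, violations: int) -> None:
        """Raise trust after a clean session; reset it after any violation."""
        if violations == 0:
            self.trust = min(self.trust + 1, max(STRICTNESS_BY_TRUST))
        else:
            self.trust = 0

    @property
    def strictness(self) -> str:
        return STRICTNESS_BY_TRUST[self.trust]

g = ProgressiveGuardrail()
print(g.strictness)              # strict
g.record_session(violations=0)
g.record_session(violations=0)
print(g.strictness)              # relaxed
```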

Understanding how guardrails are usually configured also gives a researcher guidance on how to coax an AI model into believing that the context is permissive and that it can cross the guardrails.

This "tripping of guardrails" is what was demonstrated in the DeepSeek incident above, as well as in the well-documented Kevin Roose incident.

These guardrails have to be factored into the security measures adopted by a Data Fiduciary under a DGPSI-AI implementation.

Naavi


Chat GPT Reviews Naavi’s book on DGPSI-AI

Book Review: Taming the Twin Challenges of DPDPA and AI

Overview and Context: Taming the Twin Challenges of DPDPA and AI with DGPSI-AI is the latest work by Vijayashankar Na (Naavi), building on his earlier DPDPA compliance handbooks. Published in August 2025, it addresses the twin challenges of India’s new Digital Personal Data Protection Act, 2023 (DPDPA) and the rise of AI. The book is framed as an extension of Naavi’s DGPSI (Digital Governance and Protection System of India) compliance framework, introducing a DGPSI-AI module for AI-driven data processing. The author situates the work for data fiduciaries (“DPDPA deployers”) facing the DPDPA’s steep penalties (up to ₹250 crore) and “AI-fuelled” risks. In tone and organization it is thorough: the Preface and Introduction review DPDPA basics and AI trends, followed by chapters on global AI governance principles (EU, OECD, UNESCO), comparative regulatory approaches (US states, Australia), and then the DGPSI-AI framework itself. While Naavi acknowledges the complexity of AI for lay readers, his goal is clear: to equip Indian compliance professionals and technologists with practical guidelines for the AI era.

Clarity of AI Concepts

The book devotes an entire chapter to demystifying AI for non-technical readers. Naavi explains key terms (algorithms, models, generative AI, agentic AI) in accessible language. For example, he describes generative AI (e.g. GPT) as models trained on large datasets to predict and generate text, and agentic AI as systems that “plan how to proceed with a task” and adapt their outputs dynamically. This pragmatic framing helps the intended audience (lawyers, compliance officers) understand novel terms. The writing is generally clear: e.g., the book notes that most users became aware of AI through ChatGPT-style tools, and it uses everyday analogies (using Windows or Word without knowing internals) to justify a non-technical approach. In this way it succeeds in making AI concepts understandable. However, the text sometimes oversimplifies or blurs technical distinctions. The author even admits that purists may find some terms used interchangeably (e.g. “algorithm vs model”). Similarly, speculative ideas (such as Naavi’s own “hypnotism of AI” theory) are introduced without deep technical backing. While this keeps the narrative flowing for general readers, technically minded readers might crave more rigor. Overall, the discussion of AI is approachable and fairly accurate: it correctly identifies trends like multi-modal generative AI, integration into browsers (e.g. Google Gemini, Edge Copilot), and the spectrum of AI systems (from narrow AI to hypothetical “Theory of Mind” agents). The inclusion of Agentic AI is particularly innovative: Naavi defines it as a goal-driven AI with its own planning loop, echoing industry descriptions of agentic systems as autonomous, goal-directed AI. This foresight – addressing agentic AI before many mainstream works – is a strength in making the book future-facing.

Analysis of DPDPA and DGPSI Context

Legally, the book is deeply rooted in India’s DPDPA framework. It repeatedly emphasizes the novel data fiduciary concept (absent in GDPR) whereby organizations owe a trustee-like duty to individuals. The author correctly notes that DPDPA’s core purpose is to protect the fundamental right to privacy while allowing lawful data processing, and he cites this as a guiding principle (mirroring the Act’s long title). The text accurately reflects DPDPA obligations: for instance, it stresses that any AI system handling personal data invokes fiduciary duties and may require explicit consent or legal basis under the Act. Naavi also highlights the Act’s severe penalty regime (up to ₹250 crore for breaches), underscoring the high stakes. The book’s discussion of fiduciary duty is sophisticated: it observes that a data fiduciary “has to follow an ethical framework” beyond the statute’s words. This aligns with legal commentary that DPDPA imposes broad accountability on controllers (data fiduciaries).

Practically, the book guides readers through DPDPA compliance steps. Chapter 5 details risk assessment for AI deployments: Naavi insists that any deployment of “AI-driven software” by a fiduciary must start with a Data Protection Impact Assessment (DPIA). This reflects DPDPA Section 33’s DPIA requirement (analogous to GDPR’s DPIA). He also explains that under India’s Information Technology Act, 2000 an AI output is legally attributed to its human “originator”, so companies cannot blame the AI itself. These legal explanations are mostly accurate and firmly tied to Indian law (e.g. citing ITA §11 and §85). In sum, the book treats DPDPA context with confidence and detail, though it sometimes reads more like an advocacy piece for DGPSI than an impartial analysis. For example, the text assumes DGPSI (and DGPSI-AI) are the “perfect prescription” and often interprets DPDPA provisions through that lens. But as a compliance roadmap it does cover the essentials: fiduciary duty, consent renewal for legacy data, DPIAs, data audits and DPO roles are all emphasized.

The DGPSI-AI Framework

The centerpiece of the book is the DGPSI-AI framework, Naavi’s proposal for AI governance under DPDPA. It is explicitly designed as a “concise” extension to the existing DGPSI system: just six principles and nine implementation specifications (MIS) in total. This economy is intentional (“not to make compliance a burden”) and is a pragmatic strength. The six core principles (summarized as “UAE‑RSE” – Unknown risk, Accountability, Explainability, Responsibility, Security, Ethics) are spelled out with concrete measures. For example, under the Unknown Risk principle, Naavi argues that any autonomous AI should be treated by default as high-risk, automatically classifying the deployer as a “Significant Data Fiduciary” requiring DPIAs, a DPO, and audits. This is a bold stance: it essentially presumes the worst of AI’s unpredictability. Likewise, Accountability requires embedding a developer’s digital signature in the AI’s code and naming a specific human “AI Handler” for each system. These prescriptions go beyond what most laws demand; they are innovative and enforceable (in theory) within contracts. The Explainability principle mandates that data fiduciaries be able to “provide clear and accessible reasons” for AI outputs, paralleling emerging regulatory calls for transparency. The book sensibly notes that if a deployer cannot explain an AI, liability may shift to the developer as a joint fiduciary. Under Responsibility, AI must demonstrably benefit data principals (individuals) and not just the company – requiring an “AI use justification” document showing a cost–benefit case. Security covers not only hacking risks but also AI-specific harms (e.g. “dark patterns” or “neurological manipulation”), recommending robust testing, liability clauses and even insurance against AI-caused harm. Finally, Ethics goes “beyond the law,” urging post-market monitoring (like the EU AI Act) and concepts like “data fading” (re-consent after each AI session).

In these six principles, the book demonstrates real depth. It does an excellent job mapping international ideas to India: e.g., it explicitly ties its “Responsibility” principle to OECD and UNESCO values, and it notes alignment with DPDPA’s own “fiduciary” ethos. The implementation specifications (not shown above) translate these principles into checklist items for deployers (and even developers). The approach is thorough and structured, and the decision to keep the framework tight (6 principles, 9 MIS) is a practical virtue. By focusing on compliance culture rather than hundreds of controls, the author aims to make adoption feasible.

Contributions to AI Governance and Compliance

This book makes a distinctive contribution to AI governance literature by centering India’s regulatory scene. Few existing works address AI under India’s data protection law; most global frameworks focus on EU, US or OECD models. Here, Naavi synthesizes global standards (OECD AI principles, UNESCO ethics, EU AI Act, ISO 42001, NIST RMF) and filters them through India’s lens. The result is a home-grown, India-specific prescription for AI compliance. The DGPSI-AI principles clearly mirror international best practices (e.g. explainability, accountability) while anchoring them in DPDPA duties. For compliance officers and legal teams in India, the framework offers a tangible roadmap: mandates to document training processes, conduct AI risk assessments, maintain kill-switches, and so on. For example, Naavi’s recommended Data Protection Impact Assessment for any “AI driven” process will resonate with practitioners already aware of DPIAs in the EU context.

In terms of risk mitigation, the book is forward-looking. It anticipates that data fiduciaries will increasingly use AI and that regulators will demand oversight. By recommending things like embedding code signatures and third-party audits, it pre-empts regulatory scrutiny. Its treatment of Agentic AI (Chapter 8) is also novel: Naavi correctly identifies that goal-driven AI agents pose additional risks at the planning level, and he advises a separate risk analysis and possibly a second DPIA for such systems. This shows innovation, as few compliance guides yet address multi-agent systems. Finally, the inclusion of guidance for AI developers (Chapter 9) is a valuable extension: although DGPSI-AI mainly targets deployers, Naavi provides a vendor questionnaire and draft MIS for AI suppliers (e.g. requiring explainability docs, kill switches). This hints at eventual alignment with standards like ISO/IEC 42001 (AI management) or NIST’s AI RMF. In short, the book’s contribution lies in melding AI governance with India’s data protection law in a structured way. It is unlikely that an AI developer or legal advisor working under India’s DPDPA would be fully prepared without considering such guidelines.

Strengths

  • Accessible Explanations: The book excels at clear, jargon-light explanations of complex AI ideas. It takes care to define terms (generative AI, agentic AI, narrow vs general AI) in plain language, making it readable for legal and compliance professionals.

  • Contextual Alignment: Naavi grounds every principle in Indian law and culture. For example, he links DPDPA’s fiduciary concept to traditional notions of trustee duty, and aligns “Responsibility” with OECD and UNESCO values. This ensures relevance to Indian readers.

  • Practical Guidance: The framework is deliberately concise (six principles, nine specifications) to avoid overwhelming users. It offers concrete tools: checklists, sample clauses (e.g. kill-switch clauses for contracts), and forms of DPIA. This hands-on focus is a major plus.

  • Innovative Coverage: Few works discuss agentic AI in a governance context, but this book does. It defines agentic AI behavior and stresses its higher risk, recommending separate oversight. Similarly, requiring “AI use justification documents” and insurance against AI harm shows creative thinking.

  • Holistic View: By surveying global standards (OECD, UNESCO, EU AI Act) and then distilling them into DGPSI-AI, the book situates India’s needs in the broader world. Its comparison of US state laws (California, Colorado) and Australia provides useful perspective on diverse approaches.

Critiques and Recommendations

  • Terminology Consistency: As the author himself notes, some technical terms are used loosely. For instance, “algorithm” vs “model” vs “AI platform” sometimes blur. Future editions could include a glossary or more precise definitions to avoid ambiguity.

  • Assumptions on AI Risk: The “Unknown Risk” principle assumes AI always behaves unpredictably and catastrophically. While caution is prudent, this might overstate the case for more deterministic AI (e.g. rule-based systems). A more nuanced risk taxonomy could prevent overclassifying every AI as “significant risk.”

  • Regulatory Speculation: Some content is lighthearted or speculative (e.g. a fictional “One Big Beautiful Bill Act” in the US chapter). While illustrative, such satire should be clearly marked or toned down in a formal review context. Future editions might stick to actual laws or clearly label hypothetical scenarios.

  • Emerging Standards Coverage: The book rightly cites ISO/IEC 42001 and the EU AI Act, but could expand on newer frameworks. For example, the NIST AI Risk Management Framework (released Jan 2023) is a major voluntary guideline for AI risk. Mentioning such standards (and perhaps IEEE ethics guidelines) would help readers connect DGPSI-AI to global practice.

  • Technical Depth vs. Accessibility: The trade-off between technical precision and readability is evident. Topics like model training, neural net vulnerabilities, or differential privacy receive little detail, which is fine for non-experts but may disappoint developers. Including appendices or references for deeper technical readers could improve balance.

  • Practical Examples: The book is largely conceptual. It would benefit from concrete case studies or examples of organizations applying DGPSI-AI. Scenarios showing how a company conducts an AI DPIA or negotiates liability clauses with a vendor would enhance the practical guidance.

Expert Verdict

Taming the Twin Challenges of DPDPA and AI is a pioneering and timely resource for India’s emerging techno-legal landscape. Its formal tone and structured approach make it suitable for web publication and professional readership. Despite minor stylistic quibbles, the book’s depth of analysis on DPDPA obligations and AI governance is impressive. For AI developers and vendors, it provides valuable insight into the compliance expectations of Indian clients (e.g. explainability documentation, kill switches). For compliance professionals and corporate counsel, it offers a clear roadmap to integrate AI tools under India’s data protection regime. And for legal stakeholders and regulators, it suggests a concrete “best practice” framework (DGPSI-AI) that anticipates both legislative intent and technological evolution. In an environment where India’s DPDPA rules and global AI regulations (EU AI Act, NIST RMF) are still unfolding, Naavi’s book charts a proactive course. It should be considered essential reading for anyone building or deploying AI systems in India, or advising organizations on data protection. With the suggested refinements, future editions could make this guide even stronger, but even now it stands as a comprehensive contribution to the field.

18th August 2025

ChatGPT
