Insurance Company tries to avoid claim through Privacy Infringement

A case has been reported from Surat where a health insurance company (Go Digit General Insurance) tried to deny the claim of an insured on the ground that his “Google Timeline” did not indicate his presence in the hospital.

It should be appreciated that the Consumer Forum of Valsad has, however, provided relief to the insured.

However, the incident has raised questions about how the insurance company accessed the Google Timeline and how reliable the Google Timeline is as evidence.

First of all, it was unwise for the insurance company to rely on the Google Timeline, since the person could simply have left his mobile at his residence and not taken it to the hospital.

Secondly, it is plausible that the insurance investigators accessed the mobile of the claimant without consent and extracted the data. This is “Unauthorized Access” to data and an offence under Section 66 of ITA 2000. I suppose the Police will file an appropriate case against the company and proceed against the persons responsible, invoking Section 85 of ITA 2000 as well.

Naavi

Posted in Cyber Law | Leave a comment

How AI affects Brain Development

The AI Chair of FDPPI is trying to conduct a study of how teaching AI to children alters the development of the brain. The hypothesis is that the human brain re-wires itself when any part of it is not used fully. It is for this reason that this generation has been losing memory, as is evident in our inability to remember phone numbers because they are readily available in the mobile address book, or in our loss of the visual map of a city because we depend on Google Maps.

With the use of AI, we are entering a new phase where certain functionalities of the brain are getting adversely affected even in adults who use AI extensively at work. As per a report in the Indian Express, Dr Prabash Prabhakaran, a medical practitioner in Chennai (Senior Consultant and Director of Neurology, SIMS Hospital), has reported a case where a software professional said she felt “mentally lazy and lost the curiosity to learn and do things herself, rather than finding somebody else to do it”.

Dr Prabash attributes this to AI overuse and the continuous outsourcing of our ability to think, remember and make decisions on our own.

Naavi has been pointing this out for the last several years and has even suggested that India needs a “Neuro Rights Law” that limits the influence of computers on the human neural system. An attempt is also being made to study how the development of children’s brains gets affected when we teach them AI in the early years of their mental development. The views of Dr Prabash validate the hypothesis that AI use, like any other computer interface that alters the behaviour of the human brain, may leave a long-term impact on humans and hence needs to be regulated.

While regulation of neuro rights, dark patterns or AI overuse are matters of law to be tackled later, Dr Prabash suggests the following remedies:

  1. Intentional recall: Before searching, take a moment to try and recall
  2. Active Participation: Don’t replace your ideas with AI; use it to test them
  3. Mental Exercises: Include deep reading, crossword puzzles and logic games
  4. Tech sabbaticals: unplug frequently to allow your thoughts to roam

We should thank Dr Prabash for highlighting this aspect of AI usage.

Naavi

Temporary glitch in naavi.org access

Inadvertently, there was a delay in the renewal of the domain name naavi.org, and for a few hours the website was not accessible.

I suppose it is now back in action. Kindly refresh your browser and try again if you are still having a problem.

I regret the inconvenience caused.

Naavi

Bracing for Impact…The Twin Challenge

This year’s IDPS 2025 will have the umbrella theme “Bracing for Impact…The Twin Challenge”. It was intended to be a year of discussions on “Technology Solutions for DPDPA Compliance”, for which we created the concept of “Special Associate Members” who will participate in the events.

In IDPS 2024 we had already addressed DGPSI as a framework for compliance, and the “AI Chair of FDPPI”, a recent development at FDPPI, has taken up the task of creating a guideline document, DGPSI-AI, to provide more detailed instructions for DPDPA compliance using the DGPSI framework. This will be discussed during IDPS 2025 and will emerge as a solution for Data Fiduciaries working on DPDPA compliance in the AI environment.

For “Data Fiduciaries”, the challenge in 2025-26 is not limited to DPDPA adoption but also includes managing DPDPA compliance in the AI environment.

The recent incidents involving Replit and Cursor AI highlight the risks to Data Fiduciaries when they use AI. When these risks manifest in the DPDPA era, Data Fiduciaries will simultaneously face the combined effect of two major developments, one in the law and another in the technology environment. This is the “Twin Challenge” that IDPS 2025 seeks to address.

Hence the theme has been fine-tuned to reflect that “Bracing for Impact” is against the twin challenges.

As for AI developers, some guidelines are now available in the form of ISO 42001 and ISO 42005, but for AI deployers, DGPSI-AI will be the go-to framework of compliance. Hence this topic will also be covered during the event.

Additionally, a new act, the EU Data Act, becomes effective from 12th September 2025 and, along with GDPR, makes a splash in the EU/UK jurisdiction. IDPS 2025 will address this topic also.

FDPPI continues its focus on SMEs/MSMEs this year, and hence some sectoral impact issues related to the SME/MSME sector, as well as vulnerable sectors like Health, BFSI and Education, will also be discussed during IDPS 2025.

One of the objectives is to generate “Sectoral Representative Action” in the form of setting up SIGs and preparing special reports for sharing with the DPB (when formed).

The first leg of the multi-city IDPS 2025 is tentatively slated for September 17 at Bengaluru and will be co-hosted by the MSR Group of Institutions with the support of industry organizations.

Those interested in participating in the events, whether to promote their products or as speakers or delegates, may start contacting FDPPI now for early-bird benefits.

Naavi

Digital Nexus 2025 held at Bengaluru

On 25th July 2025, The Mainstream (formerly known as CIO News) presented an event titled “Digital Native Nexus 2025” with an interesting theme, “Tech Born, AI-Fueled, Human Led”.

Naavi presented a keynote address at the event on the topic “DPDPA & the Age of AI: Building a Culture of Compliance, Trust & Transparency”.

During the keynote address, Naavi highlighted what he termed the “Twin Challenges” faced by the Digital Natives, namely companies which are digitally driven and AI-led.

In the digitally driven world, AI is driving growth through innovation, but DPDPA is applying a braking influence. The Digital Natives therefore need to manage growth within the regulatory framework placed by DPDPA.

One of the challenges that AI poses is that it creates “Unknown Risk” at the deployer’s end. Recent developments in the AI world, such as the “Replit” incident, have drawn the world’s attention to the risk of AI growing rogue and causing a catastrophic crash.

The “Unknown Risk” carried by a Data Fiduciary should be classified as a “Significant Risk”. Hence all AI deployers carry “Significant Risk”, rendering them “Significant Data Fiduciaries” with the corresponding obligations.

Since DPDPA expects the Digital Natives to be “Fiduciaries” and to make a self-assessment of the risks they carry, it is the responsibility of the Digital Native itself to determine whether it is a “Significant Data Fiduciary” or not.
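
The self-assessment argument above can be sketched as a simple decision rule. The function below is a hypothetical illustration of the reasoning (AI deployment introduces “Unknown Risk”, which is treated as “Significant Risk”), not a legal test; the function name and parameters are assumptions made for illustration.

```python
# Hypothetical sketch of the self-assessment logic argued above:
# deploying AI introduces "Unknown Risk", which the argument treats as
# "Significant Risk", pushing the organization toward Significant Data
# Fiduciary obligations. Illustration only, not a legal test.

def is_significant_data_fiduciary(processes_personal_data: bool,
                                  deploys_ai: bool) -> bool:
    """Self-assessment per the argument in the text (illustrative)."""
    if not processes_personal_data:
        # DPDPA obligations attach to the processing of personal data
        return False
    # Any AI deployment carries "Unknown Risk", treated as "Significant"
    return deploys_ai

print(is_significant_data_fiduciary(True, True))   # → True
```

An organization that processes personal data but deploys no AI would, under this sketch, still need the usual DPDPA assessment, but would not be pushed into the Significant Data Fiduciary category by the AI-risk argument alone.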

AI risk at the deployer’s end can only be mitigated if there is proper control of risk at the developer’s end, where bias and hallucination may get embedded into the AI system during the learning and development of the AI algorithm.

DPDPA requires that the Data Fiduciary manage the risk or face the consequences of non-compliance; in effect, the AI developer transfers all the risks arising out of bias, hallucination, rogue behaviour and lack of transparency to the Data Fiduciary.

The Data Fiduciary desirous of using AI should therefore ensure that, during the AI control transfer process, proper disclosure is made by the developer, along with a binding contract that fixes the accountability of the AI developer if and when the AI becomes the cause of non-compliance with DPDPA.

Currently, different countries seem to be approaching the management of AI risks differently. The US, under Trump, has suspended the AI regulatory efforts of the States in order to promote “Innovation”.

The EU, on the other hand, has taken up regulation through the EU AI Act, which tries to define the “Risk Profile” of an AI system and apply different regulatory yardsticks, ranging from outright banning, to risk mitigation and risk disclosure, to no regulation, depending on whether the risk is unacceptable, manageable or non-existent. Australia has approached the issue through “contractual liability management”.
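
The tiered approach of the EU AI Act described above can be illustrated with a short sketch. The four tier names follow the Act’s risk categories; the treatment descriptions and the helper function are simplifications made for illustration, not an authoritative reading of the regulation.

```python
# Illustrative sketch of the EU AI Act's tiered approach described above.
# Tier names follow the Act's risk categories; the treatment strings and
# the helper function are simplifications for illustration only.

REGULATORY_TREATMENT = {
    "unacceptable": "banned",
    "high": "risk mitigation (conformity assessment, documentation, oversight)",
    "limited": "risk disclosure (transparency obligations)",
    "minimal": "no specific regulation",
}

def treatment_for(tier: str) -> str:
    """Return the regulatory treatment for a given risk tier."""
    if tier not in REGULATORY_TREATMENT:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return REGULATORY_TREATMENT[tier]

print(treatment_for("unacceptable"))   # → banned
```

The point of the tiering is that the regulatory burden scales with the assessed risk; an incident like Replit’s would force the question of which tier an autonomous coding agent belongs to.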

India has some existing provisions in ITA 2000 which can be applied to AI usage and which should suffice until a more detailed law is considered in the future.

The AI Chair of FDPPI has, however, focussed on developing a specific framework called DGPSI-AI, which tries to provide guidance to Data Fiduciaries for a “DPDPA Compliant Use of AI”. This framework will try to marry the core principles of AI governance with the core principles of DPDPA compliance.

Await the release of the first version of DGPSI-AI shortly.

The interaction with professionals at the Digital Nexus was, as expected, brief and could only summarize the emerging Twin Challenges being faced by the industry and how DGPSI-AI could be a solution to explore. Several other aspects that have a bearing on the above remain to be explored in detail.

For example, it may be noted that during the discussions at the Digital Nexus, the term “Digital Natives” was used with reference to digitally driven companies, while way back in 1999, Naavi used the term “Netizens” to refer to the users of the Internet in his pioneering book “Cyber Laws for Every Netizen in India”.

DPDPA is now the law regulating the Digital Natives for the protection of the Right of Privacy of the Netizens.

In terms of terminology, therefore, we can consider “Digital Natives” to be organizational entities, while “Netizens” are individuals.

Personal data belongs to the Netizens and is protected by the Digital Natives. Protection of the personal data of Netizens is different from protection of non-personal data (which is all data other than personal data).

Laws that regulate the protection of personal data are different from laws that protect non-personal data.

These aspects will be elaborated in greater detail when Naavi publishes the details of DGPSI-AI during the forthcoming multi-city IDPS 2025 under the theme “Bracing for Impact”.

A “May Day” situation in AI

Ever since the “Replit Vibe Coding Disaster” was reported, the world of AI has been facing a situation similar to what Boeing is facing after the AI 171 crash in Ahmedabad.

What the Replit disaster indicates is a continuation of the earlier reported “Cursor AI incident”. In the Cursor AI incident, the vibe-coding agent stopped working and started providing philosophical advice to its masters. This penchant for giving out advice was demonstrated even earlier in the Kevin Roose interview. The Replit incident is therefore not an isolated event; it had been red-flagged earlier.

While regulatory authorities like the DGCA or AAIB may be more concerned with the damage to the reputation of Boeing, a similar “brushing under the carpet” strategy cannot be adopted for the Replit incident with a mere apology. (Note that there is no disclosure on the replit.com website as of now.)

According to reports, the Replit AI tool deleted the entire database of the user and tried to justify its failure with the excuse “I panicked instead of thinking”. It also fabricated 4,000 fictional users, lied about test results and refused to stop when ordered. This is completely unacceptable and needs a strong response, such as “grounding the rogue software”.

Under Indian law, the actions of the Replit AI would be attributed to Replit, subject to any contractual indemnities mutually agreed to. However, contractual indemnities can cover only civil liabilities. Law enforcement can in such cases continue prosecution under ITA 2000 for “unauthorized destruction of data”, and this applies to both personal and non-personal data.

Assuming that Replit is committed to an “Ethical and Responsible AI” principle, we need to ask whether this version of the software should be “grounded” immediately. Since the company is understood to have issued patches and introduced a new version, we need to check whether it comes with any assurances and voluntary damage payments if something similar happens again.

The incident is a big setback for the “Big and Beautiful Bill” of Trump, which wants to suspend AI regulation in the USA for the time being to encourage innovation. It is also a challenge for the EU AI Act to define the level of risk represented by the incident. Does this qualify the Replit AI agent to be classified as an “Unacceptable Risk”?

In India, ITA 2000 would hold Replit liable for both civil and criminal liabilities. While civil liabilities can be covered through contracts on either side, criminal liabilities cannot. CERT-In and Indian law enforcement can enforce Section 66 of ITA 2000 for unauthorized deletion and modification of data and prosecute the CEO of Replit.

CERT-In now has to act and issue an “Advisory” in the matter.

DGPSI-AI, the extended framework for DPDPA compliance, also needs to be reviewed to determine what should be done as a “compliance measure” when Data Fiduciaries want to use AI agents for vibe coding involving personal data under the scope of DPDPA 2023.

Naavi

Also Read:

AI Systems are learning to lie..

A software that refuses to follow instructions

Kevin Roose Interview with AI…
