How Does AI Simulate Humans in Forgetting?

When an AI algorithm is trained, there is a dilemma that we need to address. Normally, a computer is expected to have a 100% accurate memory of the data stored inside it. The human brain, however, functions with its own infirmities, one of which is a tendency towards uncertain memory. Normal human memory has a “fading” nature, where memories fade with time. There are, however, exceptions: memories with high emotional significance tend to remain fresher than others. Similarly, perceptions formed through multiple senses tend to remain in memory longer.

If the output of an AI algorithm is to resemble human behaviour, one school of thought holds that this trait of “memory fading” needs to be factored into the behaviour of the algorithm. At the same time, many may ask why a system that is naturally memory-efficient should be made to degrade itself.

In areas where humans build “Muscle Memory” over years of training, AI-led robots can be programmed instantly, and it would be worthwhile to do so if the robots are required to remember the instructions permanently. However, in some applications where the output needs to be more human, it would be better if the output were tempered with the time value of data, so that older data carries less weight than recent data. One such situation is the “Valuation of Personal Data”.

In suggesting a “Personal Data Valuation System” under the Data Valuation Standard of India (DVSI), we have been struggling to accommodate a formula for valuing data as it ages. AI may now be able to find a solution to this complex problem of “Personal Data Valuation”.

In the personal data domain, value depends on utility. Hence, if the consent of the data principal is restricted in time, that restriction should automatically be reflected in the value of the data. The value may have to follow a linear degradation, with a wipe-out at the end of the consent period. If the data is archived under legitimate use, the value may drop from the “Utility phase” to the “Archival phase”.

Currently, Machine Learning specialists speak of techniques such as the following to incorporate a differential weightage of data value into the learning process.

1. Time Decay Weighting

    • Exponential or Linear Decay: Assigns weights to samples based on how recently they were recorded, with more recent data points given higher weights.

    • This approach is commonly used in recommender systems, time series models, and search algorithms to ensure the model adapts quickly to recent trends (a minimal sketch covering techniques 1 to 3 appears after this list).

2. Decay-Weighted Loss Functions

    • The loss function during training incorporates weights for each data instance based on its age. Recent samples contribute more to the loss, guiding the model to learn primarily from the most up-to-date information.

    • Example: Adaptive Decay-Weighted ARMA, a method for time series forecasting, modifies the loss function with a decay weighting function so that the influence of observations decays with age.

3. Sample Weighting or Instance Weighting

    • Most machine learning libraries allow you to specify sample weights when training models. By assigning larger weights to recent data, algorithms like gradient boosting, neural networks, or linear regression can be skewed to prioritize fresh inputs.

    • This approach is algorithm-agnostic and is especially practical for datasets where age can be explicitly measured or timestamped.

4. Age-of-Information (AoI) Weighting in Federated & Distributed Learning

    • In distributed or federated learning, gradients or updates from devices with fresher data are weighted more heavily. One example is age-weighted FedSGD, which uses a weighting factor reflecting the recency of data (its Age-of-Information) and helps achieve faster convergence and improved performance in non-IID (not independent and identically distributed) scenarios.

    • The technique calculates and applies an “age” metric for each device or data shard, favouring those that have just contributed fresh samples.

5. Rolling Windows & Sliding Windows

    • Instead of weighting, some systems simply drop older data altogether and retrain or update the model using only the data from a recent rolling window. This restricts the model’s knowledge to recent history (the second sketch after this list shows this alongside technique 4).
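To make the first three techniques concrete, here is a minimal Python sketch, assuming NumPy and scikit-learn are available. The synthetic data, the 90-day half-life and the helper decay_weighted_mse are illustrative assumptions, not part of any published method: the sketch derives exponential decay weights from sample ages, passes them as sample_weight to a regression model (technique 3), and applies the same weights inside a hand-rolled decay-weighted loss (technique 2).

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Illustrative synthetic data: one feature observed daily for two years.
    rng = np.random.default_rng(0)
    n = 730
    age_days = np.arange(n)[::-1].astype(float)   # 0 = today; larger = older
    X = rng.normal(size=(n, 1))
    y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

    # Technique 1: exponential time-decay weights (assumed 90-day half-life).
    half_life = 90.0
    decay_rate = np.log(2) / half_life
    weights = np.exp(-decay_rate * age_days)      # recent samples weigh close to 1

    # Technique 3: most libraries accept per-sample weights at training time.
    model = LinearRegression().fit(X, y, sample_weight=weights)

    # Technique 2: the same idea expressed as a decay-weighted loss function.
    def decay_weighted_mse(y_true, y_pred, w):
        """Squared error in which each residual is scaled by its recency weight."""
        return np.sum(w * (y_true - y_pred) ** 2) / np.sum(w)

    print(decay_weighted_mse(y, model.predict(X), weights))

A linear decay is the obvious variant: replace the exponential with, say, np.clip(1 - age_days / 365.0, 0.0, 1.0), so that weights fall to zero after a year.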
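Techniques 4 and 5 can be sketched just as briefly. The federated example below is only a toy illustration of age-weighted aggregation in the spirit of age-weighted FedSGD; the decay rate, the per-round age metric and the sample values are my assumptions, not the published algorithm.

    import numpy as np

    # Technique 4 (toy): aggregate client gradients in a federated round,
    # weighting clients whose local data is fresher (lower Age-of-Information).
    client_grads = [np.array([0.9, 1.1]), np.array([0.5, 0.4]), np.array([1.4, 1.6])]
    data_age = np.array([0.0, 5.0, 1.0])     # rounds since each client's data arrived

    aoi_weights = np.exp(-0.3 * data_age)    # assumed decay rate: 0.3 per round
    aoi_weights /= aoi_weights.sum()         # normalise so the weights sum to 1
    global_grad = sum(w * g for w, g in zip(aoi_weights, client_grads))

    # Technique 5: a rolling window keeps only recent samples; no weighting at all.
    sample_age_days = np.array([400, 30, 7, 180, 2])
    window_days = 90                         # assumed window size
    recent_mask = sample_age_days <= window_days

    print(global_grad, recent_mask)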

When it comes to valuing the utility of personal data, the impact of data protection laws, which link utility to the “Consent” of the data principal, needs to be incorporated into the valuation module. Hence, Machine Learning specialists need to develop newer algorithms that ingest a basic utility value, moderate it by aging, and link it to the consent period. The module should also assign a lower utility level during the legitimate-use period after consent expires, when the personal data moves from active storage to archival or is anonymised and moved to a research data store. A sketch of such a valuation function follows.
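As a thought experiment, such a consent-linked valuation could be prototyped along the following lines. This is a minimal sketch: the linear utility-phase degradation, the flat 20% archival discount and the function name personal_data_value are all illustrative assumptions, not a standard under DVSI.

    def personal_data_value(base_value: float,
                            age_days: float,
                            consent_days: float,
                            archival_days: float,
                            archival_factor: float = 0.2) -> float:
        """Value of a personal data record as it ages.

        Utility phase  : linear degradation from base_value down to the
                         archival level over the consent period.
        Archival phase : a flat, discounted value during legitimate-use
                         retention after consent expires.
        Wipe-out       : zero value once archival retention also ends.
        """
        if age_days < 0:
            raise ValueError("age cannot be negative")
        if age_days <= consent_days:
            floor = base_value * archival_factor
            return base_value - (base_value - floor) * (age_days / consent_days)
        if age_days <= consent_days + archival_days:
            return base_value * archival_factor
        return 0.0

    # Example: a record worth 100 units, one-year consent, six-month archival.
    print(personal_data_value(100.0, age_days=200, consent_days=365, archival_days=180))

On these assumptions, the record is worth about 56 units on day 200, a flat 20 units during the archival period, and nothing thereafter.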

A similar consideration of the valuation of personal data will also arise when Regulatory Authorities determine the level of penalty for data loss, as was recently reported from South Korea, where penalties were imposed on some educational institutions for the loss of data that was 20-40 years old. Whether the penalty was reasonable in that context remains debatable. When the Indian Data Protection Board (DPB) is confronted with similar issues, there will be a need for an AI algorithm that can determine a “Reasonable Penalty” for the failure of “Reasonable Security”.

The AI Chair of FDPPI invites AI researchers to develop an appropriate model for making penalty decisions less subjective, by recommending a system that evaluates the value of the data lost in a data breach situation.

Naavi


The positive impact of the Replit incident and Counselling Psychology prospects

The recent incident of the Replit AI agent going rogue, and the earlier Cursor AI incident, clearly indicate that “AI for Vibe Coding” comes with its own risks.

These incidents highlighted that AI agents cannot be fully relied upon for coding functions and need manual supervision. In a way, this gives an assurance that not all human jobs are likely to be taken over by AI.

The “Rogue Risk” of AI is part of the “Hallucination” effect, and hallucination itself is tied to the “creative” character built into algorithmic decision making. Hence it cannot be easily eliminated, though AI developers need to continue their efforts in this direction.

In the Replit incident, it appears that the “Kill Switch” was either non-existent or failed. This is a red flag for management; for the immediate future, human oversight of every AI algorithm and the introduction of internal sandbox protection become essential when using AI for coding.

For this reason, these incidents of cyber security failure should have the positive effect of reducing the gloom and fear created by recent reports of large job losses attributed to AI.

Yesterday, I participated in a TV debate on “Job Losses due to AI”, particularly in Bengaluru, in which the prospect of 25,000 to 1 lakh jobs being lost in Bengaluru in the next year, and its impact on society, came up for a brief discussion.

Most tech supporters try to rationalize the situation by comparing it with earlier occasions when major technical developments disrupted the market but quickly stabilized, often for the better. The introduction of computers in businesses such as Banking is an example that stands out: computerization did not affect employment adversely but rather contributed to an increase in manpower requirements.

It is, however, necessary for us to realize that the issue of “AI replacing humans” is much more complex than computerization, and the prospect of AI agents and industrial robots replacing humans man for man in the job market is real and threatening.

While in the long run new jobs do get created because the economy itself progresses, the existing generation of employees, particularly those coming out of colleges now, will find it hard to meet the challenge. By the time they re-skill themselves, they will be one-year-old graduates and will naturally have depreciated in value.

Many of us dismiss the problem by saying that the current workforce needs to “Up-skill”. But “Up-skilling” is different from “Re-skilling”, and meeting the challenge of AI in the job market may require more “Re-skilling” than “Up-skilling”.

In Up-skilling, a person is re-trained within his own basic functional capability, like an accountant used to manual book-keeping being trained in the use of software. This worked during the computerization of industry because the computer came in as a tool and a human was needed to operate it. But now an AI agent can replace three humans, and we do not need even one human to supervise it. Hence Up-skilling alone cannot save the day.

“Re-skilling” is sometimes impossible for the same generation and happens over a longer time span. For example, one industrial robot can replace 10 industrial workers today. The development of the industrial robot itself may require a manufacturing unit in which 100 workers are employed. But the 10 machine operators who lose their jobs today in a car manufacturing facility may not be able to re-skill themselves as workers in a robot manufacturing company, or in a company that produces the electronic components that go into an industrial robot.

This could mean that over a span of three to four years the number of jobs created may exceed the current job losses, but new employees may replace the current ones, and the current employees may have to either accept a degradation or become jobless without a replacement occupation.

Society now has to prepare itself to meet this situation and enable people to understand and accept such degradation voluntarily. If not, some of them may turn to cyber crime and some may commit suicide.

The “Up-skilling” and “Re-skilling” efforts should therefore be augmented with “Reinforcement for Voluntary Degradation”.

Up-skilling is a responsibility that industry should take up. Re-skilling is a responsibility that educational institutions need to take up.

“Reinforcement of Voluntary Degradation” is a counselling service that psychological therapists need to take up.

At the AI Chair of FDPPI, the need to study the impact of AI teaching on brain development in children has been flagged. Now there is also a need to stimulate practitioners of psychology, and more particularly of counselling psychology, to take up this role.

Counselling Psychologists help individuals cope with life’s challenges, stress, crises, etc.

Naavi


Gatekeepers under the EU Data Act

The EU Data Act, effective from 12th September 2025, has a unique provision under Article 5(3), which states:

3. Any undertaking designated as a gatekeeper, pursuant to Article 3 of Regulation (EU) 2022/1925, shall not be an eligible third party under this Article and therefore shall not:

(a) solicit or commercially incentivise a user in any manner, including by providing monetary or any other compensation, to make data available to one of its services that the user has obtained pursuant to a request under Article 4(1);

(b) solicit or commercially incentivise a user to request the data holder to make data available to one of its services pursuant to paragraph 1 of this Article;

(c) receive data from a user that the user has obtained pursuant to a request under Article 4(1).

The definition of “Gatekeeper” is drawn from the Digital Markets Act and applies to large online platforms having a significant impact, based on turnover and reach criteria. In September 2023, the EU designated six entities as gatekeepers: Alphabet (Google), Amazon, Apple, ByteDance (TikTok), Meta and Microsoft.

Hence the above provision of the Act applies to these six entities.

Note that these entities are not permitted to “commercially incentivise a user in any manner” to make data available for use. This also means that, in respect of personal data, they cannot obtain a valid “Consent” for such data.

Data that comes under the scope of the DMA is the data generated by an online platform, including the profiling data generated by these entities. It is inclusive of the “transactional data” over which the entity may have a legal right.

These provisions may mean that data such as that generated by Google Maps may not be available for sharing by Google. In principle, this sort of provision is part of the Competition Act in India and could affect many other “Platforms” having a significant presence. It remains to be seen whether this could affect the web scraping services used, with permission, on platforms such as X or LinkedIn.

In India, the Competition Act is the relevant law, under which large platforms could be defined as “Dominant Companies”, and the Competition Commission of India has the right to review the market situation from time to time and declare the dominant status of companies.

Now, the Competition Commission of India (CCI) needs to take a look at the combined effect of the EU Data Act and the DMA, and review whether India should have similar provisions when DPDPA 2023 is notified.

Naavi


CERT-In issues Cyber Security Audit Guidelines

In a welcome move, CERT-In released a comprehensive Cyber Security Audit Guideline on July 25, 2025, which should become the preferred audit guideline for ISMS audits in India.

CERT-In derives its statutory authority from ITA 2000; hence this guideline contributes to ITA 2000 compliance and does not stop at being merely an industry best practice.

In April 2011, MeitY had issued the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.

Under Rule 8(1), it had been stated:

A body corporate or a person on its behalf shall be considered to have complied with reasonable security practices and procedures, if they have implemented such security practices and standards and have a comprehensive documented information security programme and information security policies that contain managerial, technical, operational and physical security control measures that are commensurate with the information assets being protected with the nature of business. In the event of an information security breach, the body corporate or a person on its behalf shall be required to demonstrate, as and when called upon to do so by the agency mandated under the law, that they have implemented security control measures as per their documented information security programme and information security policies.

This was followed by the second sub-rule, which states:

The international Standard IS/ISO/IEC 27001 on “Information Technology – Security Techniques – Information Security Management System – Requirements” is one such standard referred to in sub-rule (1).

When a clarification was sought through an RTI query, MeitY responded as follows:

With the notification of the Comprehensive Cyber Security Guidelines by CERT-In under the powers bestowed on it under Section 70(B) of the Act, Section 43A rules of April 2011 gets automatically amended.

We, however, request MeitY to issue an advisory in this respect, indicating:

“Comprehensive Cyber Security Guidelines dated 25th July 2025 issued by CERT-In shall be one such standard referred to in sub-rule (1)”

This guideline is applicable to all organizations in India using IT, including private sector companies, and is to be considered binding for ITA 2000 compliance.

By virtue of the direct link between Section 43A and DPDPA, the guidelines may also be considered a guideline under Rule 6(g) of the draft DPDPA Rules for “Reasonable Security” for safeguarding personal information under DPDPA.

For the CERT-In empanelled auditors, adoption of this framework is considered mandatory for their ITA 2000 compliance audits.

At FDPPI, we adopt this as a guideline for the application of “Reasonable Security Practices” for personal data protection under Section 8 of DPDPA 2023. This will be part of the DGPSI framework.

I request all DGPSI auditors to immediately adopt this framework as part of the DGPSI framework, under MIS 15 and MIS 16 of DGPSI-Lite or MIS 15 and MIS 47 of the DGPSI-Full version.

Also refer: https://www.cert-in.org.in/PDF/CyberSecurityAuditbaseline.pdf

Naavi


Insurance Company tries to avoid a claim through Privacy Infringement

A case has been reported from Surat where a health insurance company (Go Digit General Insurance) tried to deny the claim of an insured person on the ground that his “Google Timeline” did not indicate his presence in the hospital.

It should be appreciated that the Consumer Forum of Valsad has, however, provided relief to the insured.

However, the incident throws up some questions: how did the insurance company access the Google Timeline, and how reliable is the Google Timeline?

First of all, it was foolish of the insurance company to have relied on the Google Timeline, since the person could simply have left his mobile at his residence and not taken it to the hospital.

Secondly, it is plausible that the insurance investigators accessed the mobile of the claimant without consent and extracted the data. This would be “Unauthorized Access” to data and an offence under Section 66 of ITA 2000. I expect the Police to file an appropriate case against the company and proceed against the persons responsible, invoking Section 85 of ITA 2000 as well.

Naavi


How AI affects Brain Development

The AI Chair of FDPPI is trying to conduct a study of how teaching AI to children alters the development of the brain. The hypothesis is that the human brain re-wires itself if any part of it is not used fully. It is for this reason that this generation has been losing memory, as is evident in our inability to remember phone numbers because they are readily available in the mobile address book, or in the loss of our visual map of the city because we depend on Google Maps.

With the use of AI, we are entering a new phase where certain functionalities of the brain are being adversely affected even in adults who use AI extensively at work. As per a report in the Indian Express, Dr Prabash Prabhakaran, a medical practitioner in Chennai (Senior Consultant and Director of Neurology, SIMS Hospital), has reported a case in which a software professional said she felt “mentally lazy” and had lost the curiosity to learn and do things herself, preferring to find somebody else to do them.

Dr Prabash attributes this to AI overuse and the continuous outsourcing of our ability to think, remember and make decisions on our own.

Naavi has been pointing this out for the last several years and has even suggested that India needs a “Neuro Rights Law” that limits the influence of computers on the human neural system. An attempt is also being made to study how the development of children’s brains is affected when we teach them AI in the early years of their mental development. The views of Dr Prabash validate the hypothesis that AI use, like any other computer interface that alters the behaviour of the human brain, may leave a long-term impact on the human and hence needs to be regulated.

While the regulation of neuro rights, the use of dark patterns and AI overuse are matters of law to be tackled later, Dr Prabash suggests the following remedies:

  1. Intentional recall: Before searching, take a moment to try and recall.
  2. Active Participation: Don’t replace your ideas with AI; use it to test them.
  3. Mental Exercises: Include deep reading, crossword puzzles and logic games.
  4. Tech sabbaticals: Unplug frequently to allow your thoughts to roam.

We should thank Dr Prabash for having highlighted this aspect of AI usage.

Naavi
