For the Attention of the members of the India AI Guidelines Committee
The goal of India’s AI Governance Framework is stated (refer para 2.1 of the report) as “To promote innovation, adoption, diffusion and advancement of AI” while mitigating risks to society.
This is the Trump model of AI regulation, in which he wanted no regulation for the next ten years, a proposal that was struck down by the US Senate. Today many states in the USA have their own AI regulations similar to the EU AI Act. These regulations are meant to recognize the “risk” of using AI and seek to regulate mainly the “AI developers”, requiring them to adopt an “Ethical, Responsible and Accountable Governance Framework” so that the harm AI may cause to society is mitigated.
The Indian guidelines, however, have not made the “safety of society” central to the framework, even though one of the declared Sutras is “People Centric Design, human oversight and human empowerment”.
Para 2.1 would have been better stated as “To promote innovation, adoption, diffusion and advancement of AI without placing society at unreasonable risk”.
Under DGPSI-AI, the first principle is “Unknown Risk is Significant Risk”. Since AI risk is largely unknown, any use of AI should be treated as a “Significant Risk”, and all Data Fiduciaries who deploy AI should be treated as “Significant Data Fiduciaries” under DPDPA 2023.
The AI Governance Guidelines refer to risks on page 9 of the report, which states as follows:
“Mitigating the risks of AI to individuals and society is a key pillar of the governance framework. In general, the risks of AI include malicious use (e.g. misrepresentation through deepfakes), algorithmic discrimination, lack of transparency, systemic risks and threats to national security. These risks are either created or exacerbated by AI. An India-specific risk assessment framework, based on empirical evidence of harm, is critical. Further, industry-led compliance efforts and a combination of different accountability models are useful to mitigate harm.”
The above paragraph is conspicuous for failing to point out the real AI risks, namely “hallucination” and “rogue behaviour” of an AI model. Since AI deployments generally involve the use of an LLM at some base level, the risk of hallucination pervades all AI tools and creates an “Unknown Risk”. While the paragraph recognizes impersonation through deepfakes and bias, the term “systemic risks” needs to be expanded to cover “hallucination risk”, where the algorithm behaves in a manner in which it was not intended to behave.
Examples of such hallucination and rogue behaviour include the Replit incident, the Cursor AI incident and the DeepSeek incident, all of which were recorded in India. The Committee does not seem to consider this a risk and restricts its vision to deepfakes.
Hence the Committee also ignores the need for guardrails as mandatory security requirements and for a proper kill switch to stop rogue behaviour. When AI is deployed in a humanoid or industrial robot, the physical power of the robotic body introduces a high level of physical risk to users at large, whether it is the chess player whose finger was broken by a robot or an autonomous car that may ram into a crowd.
DGPSI-AI addresses these risks through its implementation specifications.
A brief summary of the implementation specifications under DGPSI-AI relating to these risks is given below.
The deployer of an AI shall take all measures essential to ensure that the AI does not harm society at large. In particular, documentation of the following assurances from the licensor is recommended.
1. The AI comes with a tamper-proof kill switch.
2. In the case of humanoid robots and industrial robots, the kill switch shall be controlled separately from the intelligence imparted to the device, so that the device intelligence cannot take over the operation of the kill switch.
3. Where the device attempts to access the kill switch without human intervention, a self-destruct instruction shall be built in.
4. Cyborgs and sentient algorithms are a risk to society and shall be classified as critical risks and regulated more strictly than other AI, through an express approval at the highest management level in the data fiduciary.
5. Data used for learning and for modifying the future decisions of the AI shall be given a time-sensitive weightage, with a “fading memory” parameter assigned to the age of the observation (a minimal sketch of such a weighting is given after this list).
6. Ensure that there are sufficient disclosures to the data principals about the AI risk.
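To illustrate specification 5 above, the following is a minimal sketch of a “fading memory” weighting, assuming a simple exponential decay; the function name, the half-life value and the decay formula are our own illustrative choices, not prescribed by DGPSI-AI.

```python
from datetime import datetime, timedelta, timezone

def fading_memory_weight(observed_at: datetime,
                         now: datetime,
                         half_life_days: float = 180.0) -> float:
    """Illustrative 'fading memory' weight: older observations count for less.

    The weight halves every half_life_days, so behaviour learnt from stale
    observations gradually loses influence on the AI's future decisions.
    """
    age_days = max((now - observed_at).total_seconds() / 86400.0, 0.0)
    return 0.5 ** (age_days / half_life_days)

# Example: an observation recorded a year ago, with a 180-day half-life
now = datetime.now(timezone.utc)
year_old = now - timedelta(days=365)
print(round(fading_memory_weight(year_old, now), 3))  # roughly 0.245
```

Any comparable decay scheme would serve the same purpose; the essential point of the specification is that the age of an observation reduces its weight in future decisions.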
Additionally, DGPSI-AI prescribes the following:
- The deployer of AI software, in the capacity of a Data Fiduciary, shall document a risk assessment of the software, obtaining confirmation from the vendor that the software can be classified as “AI” on the basis that it leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as the DPIA for the AI process. (A sketch of the fields such a documented assessment might capture is given after this list.)
- Where the data fiduciary, in its prudent evaluation, considers that the sensitivity of the “Unknown Risk” in the given process is not likely to cause significant harm to the data principals, it shall create an “AI Deviation Justification Document” and may opt not to implement the “Significant Data Fiduciary” obligations solely by reason of using AI in the process.
- The deployer shall collect an authenticated “Explainability” document from the developer, as part of the licensing contract, indicating the manner in which the AI functions in the processing of personal data and the likely harm it may cause to the data principals.
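As a rough illustration of the first item above, the sketch below shows the kind of fields a deployer’s documented risk assessment (treated as the DPIA for the AI process) could capture. The class name, field names and the example product are hypothetical assumptions on our part, not part of the DGPSI-AI specification text.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIRiskAssessmentRecord:
    """Hypothetical record a deployer (Data Fiduciary) could keep as the DPIA for an AI process."""
    software_name: str
    vendor: str
    # Vendor confirmation on whether the software qualifies as "AI":
    uses_autonomous_learning: bool       # adapts its behaviour through learning algorithms
    uses_probabilistic_models: bool      # generates outputs not fully predetermined by explicit code
    # Risk documentation
    identified_harms: List[str] = field(default_factory=list)
    explainability_doc_ref: str = ""     # reference to the developer's explainability document
    deviation_justification_ref: str = ""  # filled only when SDF obligations are not adopted

    def qualifies_as_ai(self) -> bool:
        """Treat the software as 'AI' if either criterion in the vendor confirmation holds."""
        return self.uses_autonomous_learning or self.uses_probabilistic_models

# Example usage with a hypothetical product
record = AIRiskAssessmentRecord(
    software_name="ChatAssist",
    vendor="Example Vendor Ltd",
    uses_autonomous_learning=True,
    uses_probabilistic_models=True,
    identified_harms=["hallucinated output shown to a data principal"],
)
print(record.qualifies_as_ai())  # True -> treat the deployer as a Significant Data Fiduciary
                                 # unless an AI Deviation Justification Document is created
```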
There are more such specifications in DGPSI-AI; some of the additional specifications apply to the developer and to Agentic AI systems. (Details are available in the book “Taming the Twin Challenges of DPDPA and AI through DGPSI-AI”, best read along with the earlier book “DGPSI: The Perfect Prescription for DPDPA Compliance”.)
It may come as a surprise to the members of the Committee that this knowledge base exists and has been ignored. Many members of MeitY are aware of, and probably have copies of, these books without realizing their relevance to the activities of the Committee.
We hope the Committee members will at least now understand that they have been working with deficient data, either because it was not known to the research team (which knew all about international laws but not about the work going on in India) or because it was deliberately kept away from the Committee.
More comments could be made on the recommendations, but our interest here is only to point out the bias in the data collected by the Committee for preparing its report and the deliberate attempt to suppress the work of DGPSI and DGPSI-AI as if it does not exist.
We want to reiterate that, if MeitY so wishes, DGPSI and DGPSI-AI can be used as a DPDPA implementation standard to the exclusion of ISO 27701, ISO 42001 and the plethora of other ISO standards that some data fiduciaries may look at. The majority of data fiduciaries, who are in the SME/MSME segment, cannot afford ISO audits and certifications, which would be expensive and redundant for their requirements. At best they will go through ISO certification brokers who may offer them certificates at less than Rs 5000/- for showcasing. This will create a false impression with the Government that many are compliant, though they have little understanding of what compliance means.
Even the next document we are expecting from MeitY, namely the DPDPA Rules, will perhaps be published without taking into consideration the information already available on the web.
It is high time MeitY looked for knowledge from across the country when such reports are prepared.
We request MeitY to respect indigenous institutions not just in statements and by adding “India” to the report, but in spirit, by recognizing who represents the national interest and who merely replicates foreign work and adopts it as a money-making proposition without original effort.
Naavi






