So far we have discussed four principles of DGPSI-AI, a framework for compliance with the DPDPA by an AI Deployer. We will discuss the responsibilities of the Developer subsequently.
They are:
a) "Unknown Risk is Significant Risk"
b) Accountability
c) Explainability
d) Responsibility
These principles are discussed in the context of the AI Deployer and are an extension of the DGPSI-Full framework.
To summarize what we have discussed so far: the principle of "Unknown Risk is Significant Risk" suggests that an organization deploying AI should consider itself to be handling a "Significant Risk", and the AI process should therefore be treated as a "Significant Data Fiduciary" process requiring a "DPIA", a "DPO" and a "Data Auditor" to meet the compliance requirements. The principle of "Accountability" extends the first principle by requiring the designation of an "AI Handler", a human responsible for the consequences of the AI. The "Explainability" principle further requires that the deployer documents the functioning of the AI process with respect to how its output is arrived at. Since the functioning of the AI is determined by the developer, who may withhold the code, fulfilling the "Explainability" obligation of the Data Fiduciary towards the data principal needs the support of a proper contractual document between the Data Fiduciary and the supplier of the AI tool. The fourth principle, "Responsible deployment of AI", requires a justification document on the necessity and proportionality of the value addition that the Data Fiduciary intends to achieve by deploying the AI.
The next principle which we need to adopt as part of DGPSI-AI is "Security". In this context, security means that the AI algorithm shall not cause harm to the data principal whose data is processed. The classification system adopted by the EU-AI Act is based solely on the "Anticipated Risk to the Data Principal". The risks to the data principal that we need to recognize are: potential physical harm; potential mental manipulation of the kind we normally recognize as "Dark Patterns"; and, thirdly, deeper manipulation of the human brain, which is the subject of Neuro Rights regulation.
Physical harm is the predominant risk when AI is used in robots, both humanoid and industrial. Since humanoid robots are in most cases made of steel, the device is strong enough to cause significant physical damage if it misbehaves.
We can recall how a chess robot crushed the finger of an opposing player who made his move too quickly. Similarly, there are instances of industrial robots dropping material on a worker and crushing him to death, of robots going rogue physically, and the BINA 48 episode in which the robot spoke of desiring a nuclear attack and taking over the world.
Thus AI has to be secured on three fronts: physical security, digital security and Neuro Security. However, given that AI Risk is "UNKNOWN", the management of physical security arising out of the deployment of AI is also constrained by the unknown nature of the risk.
From the compliance point of view, the Data Fiduciary has to assume legal liability for the consequences, take appropriate assurances from the developer regarding successful testing at the development stage, and hope that this is sufficient to claim "Reasonable Security".
Identification and acknowledgement of physical risks, Dark Patterns and Neuro Manipulation risks is considered part of the disclosure in a Privacy Notice involving AI usage under the DGPSI-AI principles. This is more like a "Statutory Warning", necessary but not sufficient. Hence it is augmented by a "Liability" admission clause supported by suitable liability insurance.
In other words, every AI algorithm shall be insured against causing damage to the user, whether physical, mental or neurological. Watch out for a list of implementation specifications further expanding on these principles.
Naavi