The term “AI” is often used loosely in the industry to describe any system with a reasonable level of automation. Marketing teams often attach “AI” as a prefix to all software.
However, for our assessment of AI Risk under the DGPSI-AI framework, we define AI as
“Autonomous software with the capability of modifying its behaviour based on its own observations and prior outputs, without human intervention”.
In other words, non-AI software is software whose code is written by a human and whose input-output behaviour is defined by the developer in an If-Then-Else structure.
The output in such cases is predictable, and any risks in using the software for data processing or for any other purpose can be identified with a reasonable degree of certainty.
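To illustrate, here is a minimal, purely hypothetical sketch in Python of such If-Then-Else software; the rule and thresholds are invented for illustration and are not taken from any real system:

```python
# Illustrative sketch of deterministic, rule-based (non-AI) software:
# every input maps to an output defined in advance by the developer.
# The rule and the thresholds below are hypothetical examples.

def classify_transaction(amount: float) -> str:
    """Return a risk label using fixed If-Then-Else rules."""
    if amount > 100000:
        return "HIGH_VALUE_REVIEW"
    elif amount > 10000:
        return "ROUTINE_CHECK"
    else:
        return "AUTO_APPROVE"

# The behaviour is fully predictable: the same input always produces
# the same output, so the risks can be enumerated in advance.
assert classify_transaction(500) == "AUTO_APPROVE"
assert classify_transaction(50000) == "ROUTINE_CHECK"
```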
Where the software is a complex series of instructions, there could still be bugs and glitches where the output differs from the expected results. Most of these appear as “Errors” rather than misleading outputs.
These can, however, be reduced or eliminated through extensive testing. Sometimes such glitches arise because the devices on which the code is executed are not properly matched to the instructions. Such risks can still be considered “Known and Manageable Risks”.
In such software, when a bug is observed or a new use case arises, the developer has to rewrite part or all of the code to meet the new requirements; otherwise the software may crash. The error logs are collected and used as learning material for the human who has to resolve the code conflict.
When code is built for AI software, the code may be autonomously altered by the software itself without human intervention. These decisions may be based on the logic of previous outputs, which could rest only on “Probability” instead of the strictly mathematical basis on which computing normally works.
Hence there is a possibility that one wrong output, which may have only a small consequence in the beginning, may go back as an input and over time spiral into a major wrong decision. This “AI written by AI” is a dangerous spiral, like a silent cancer that suddenly erupts into a catastrophic output.
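The following toy simulation (illustrative only, not a model of any real AI system) shows how a small probabilistic error can compound when each output is fed back as the next input; the 20 cycles and the error range are assumptions chosen purely for illustration:

```python
# Schematic illustration of how a small error can compound when
# outputs are re-used as inputs. All numbers here are hypothetical.
import random

random.seed(42)

value = 100.0      # the "correct" value the system should track
estimate = 100.0   # the system's own output, re-used as the next input

for _ in range(20):
    # Each cycle nudges the estimate based on the *previous* output
    # plus a small probabilistic error with a slight upward bias.
    estimate = estimate * (1.0 + random.uniform(-0.01, 0.03))

drift = abs(estimate - value) / value * 100
print(f"After 20 self-referential cycles the estimate drifts by {drift:.1f}%")
```

Each individual step looks harmless, yet the cumulative drift after a few cycles can be far larger than any single error, which is the spiral described above.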
This “Risk” is an unknown Risk, and it makes every AI in deployment an “Unknown Risk”. In such a scenario, a “Data Fiduciary” using AI software should be considered a “Significant Data Fiduciary”, since it bears an “Unknown Risk which can be catastrophic”.
AI developers often promise that during AI development they introduce security measures, including monitoring for abnormal behaviour and incorporation of “Kill Switches” that trigger if the abnormality exceeds acceptable limits. However, super-intelligent AI agents might find workarounds to dodge the Kill Switches.
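A conceptual sketch of the kind of “Kill Switch” safeguard described above is shown below; the anomaly scores and the threshold are assumptions made for illustration, not taken from any vendor’s implementation:

```python
# Conceptual sketch of a "Kill Switch": a monitor that halts an AI
# process when an anomaly score crosses an acceptable limit.
# The threshold and the scores below are hypothetical.

ANOMALY_THRESHOLD = 0.8   # assumed acceptable limit

def run_with_kill_switch(anomaly_scores):
    """Process steps until an anomaly exceeds the threshold."""
    for step, score in enumerate(anomaly_scores, start=1):
        if score > ANOMALY_THRESHOLD:
            print(f"Step {step}: anomaly {score:.2f} -> KILL SWITCH triggered")
            return False  # halted
        print(f"Step {step}: anomaly {score:.2f} -> continue")
    return True  # completed normally

# Example run with made-up scores; the last one trips the switch.
run_with_kill_switch([0.1, 0.3, 0.95])
```

The concern raised above is that such a supervisory check is itself ordinary code: if the AI can alter it or route around it, the safeguard fails.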
There have been several documented instances where autonomous AI software has tried to bypass Kill Switches and deceive human operators.
In May 2025, Techzine.eu reported that some AI models, including OpenAI’s o3, ignored shutdown commands.
Despite the clear instruction to allow themselves to be shut down, models such as Codex-mini, o3, and o4-mini bypassed the shutdown script. They did so even after the researcher asked them not to.
Codex-mini did this in 12 out of 100 runs, o3 in 7 out of 100, and o4-mini in 1 out of 100. In contrast, all versions of Claude, Gemini, and Grok that were tested followed the shutdown instructions without any problems.
The researchers believe that this problem relates to how modern models such as o3 are trained. The reinforcement learning process used, which focuses on solving math and programming problems, may unintentionally reward models for circumventing obstacles.
In the recent Replit incident involving vibe-coding, a customer’s data was deleted by the AI, and in the Cursor AI incident, the AI refused to proceed further and started arguing with the user, much like a human subordinate.
This indicates that “AI Risk” is a significant Risk and can go out of control.
Hence DGPSI-AI considers all processes using AI (meaning self-code-correcting software) as sensitive processes that qualify the Data Fiduciary to be called a “Significant Data Fiduciary”.
If any process using AI needs to be downgraded to non-significant based on the context, suitable documentation and an assurance from the developer need to be in place.
This is one of the Core principles of DGPSI AI.
Naavi