Who should configure Guardrails for AI?

FDPPI has published DGPSI-AI, a framework for DPDPA compliance in the AI environment. It extends the basic DGPSI framework for DPDPA compliance to take into account the increased risk a Data Fiduciary faces when it is exposed to AI. The basic objective of DGPSI-AI is to ensure that the risk of DPDPA non-compliance arising when AI-driven software is used to process personal data is adequately mitigated.

AI Risk is basically “Unknown” and “Unpredictable”. The many instances of AI hallucination in recent days suggest that the developers of AI models have either not configured them properly, or that AI is inherently not amenable to the elimination of hallucination risk.

The safety measures that one can take to mitigate this risk are referred to as “Guardrails”; they are embedded into the system to modify its behaviour.

In our previous article, we categorized AI from a behavioural perspective into three types, namely Adaptive, Creative and Rogue. Each of these behavioural traits may require a different set of risk management measures from the deployer.

These are behavioural expressions in the usage context, irrespective of whether the AI was created as an Ethical, Responsible, Transparent and Accountable model; the classification takes into account the risk that an AI may not behave the way it was meant to.

Obviously, a user, and perhaps even the developer, would not want the “Rogue” behaviour. The other two modes, “Adaptive” and “Creative”, are both useful in different contexts and should perhaps be configurable.

Guardrails are to be created initially by the developer; if the developer embeds an open source LLM, he should take care that the guardrails created by the LLM developer are preserved and, where necessary, enhanced.

What we need to discuss further is whether the responsibility for guardrails rests only with the developer or also extends to the deployer and the end user.

What we mean by “Adaptive” in this context is a “Deterministic” behaviour where the AI strictly responds within a defined pre-trained data environment or a defined operational data environment. Such AI can be placed under a strict leash with human supervision, so that it can be adapted to the compliance requirements without the risk that its hallucinating instincts cause it to deviate from the predetermined behavioural settings.

If the pre-training of a Generative AI model is based on training data different from the user’s data environment, the model may still exhibit “Bias” and could therefore still be considered “Unreliable”. The developer may place his own guardrails, including keeping AI outputs under strict human oversight so that no output is directly exposed to an external customer. In such cases, all risks of inappropriate outputs are absorbed by the human supervisor who authorizes the release of the output to the public.
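A minimal sketch of such a human-in-the-loop release gate is shown below. The names `Draft`, `generate_draft` and `release` are illustrative assumptions of mine, not constructs prescribed by DGPSI-AI:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    prompt: str
    output: str

def generate_draft(prompt: str) -> Draft:
    # Placeholder for the real model call in the deployed system.
    return Draft(prompt=prompt, output="<model output>")

def release(draft: Draft, supervisor_approved: bool) -> str | None:
    # No output reaches an external customer unless the accountable
    # human supervisor has explicitly authorized it; otherwise the
    # risk stays "absorbed" inside the deployer's organisation.
    return draft.output if supervisor_approved else None
```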

In DGPSI-AI, it is mandatory for every AI deployer (and also the AI developer) to designate an accountable “Human Handler” along with an “Explainability Statement”.

In such a context, the AI is used in the traditional format of a software tool operated by humans, and the “Unpredictable” risk becomes an “Absorbed Risk” of the deployer.

However, it is still possible that the AI is used in an Agentic AI form or with prompting from a user while it is being invoked.

In the case of the Agentic AI mode, the definition of the AI agent and its workflow includes the human instructions; hence the person who configured the agent should bear the responsibility and accountability for its behaviour. If there is any guardrail for the agent, it should be part of the Agentic AI’s functional definition.
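One hedged illustration of how a guardrail can travel with the agent’s functional definition, so that accountability stays with whoever configured the agent, is the following (all names here are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentDefinition:
    name: str
    instructions: str                        # the human-authored workflow
    allowed_tools: list[str]                 # explicit tool whitelist
    output_checks: list[Callable[[str], bool]] = field(default_factory=list)

def run_agent(agent: AgentDefinition, task: str) -> str:
    # Placeholder for the actual model/tool execution loop.
    output = f"<result of '{agent.name}' on: {task}>"
    # The guardrails are part of the agent definition itself, so the
    # person who configured the agent owns the resulting behaviour.
    if not all(check(output) for check in agent.output_checks):
        raise RuntimeError("Guardrail violation: output blocked")
    return output
```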

On the other hand, if we are using an AI with prompting at each instance, the responsibility to ensure that the available guardrails are not bypassed, and where possible are reinforced, lies with the user who is prompting the model. The responsibility for guardrails in this mode therefore rests with the end user.

The summary of these discussions is that “Guardrails” are not the sole responsibility of the developer; they are also the responsibility of the deployer, the creator of the Agentic AI (who may be part of the deployer’s organisation) and the end user.

Similarly, the Kill Switch is the responsibility of the developer and should not be capable of being overridden by the deployer or by the end user through prompts. This is not only an issue of “Ethical use” but also of how the Kill Switch is designed.

DGPSI-AI expects that, apart from the deployer taking responsibility for implementing his own guardrails, the developer should configure the Kill Switch in such a manner that it cannot be overridden by a prompt. Ideally, the Kill Switch should be an independent component, not accessible to the AI itself, and should incorporate a self-destruct capability that triggers when an attempt to override the Kill Switch (or a mandatory guardrail) is recognized.
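A minimal sketch of this idea, assuming a simple screening wrapper that sits outside the model’s reach (the tamper detection shown is deliberately naive and purely illustrative):

```python
import sys

class KillSwitch:
    """Lives outside the model process; the AI is given no tool or API
    through which it can read or modify this component."""

    def __init__(self) -> None:
        self._tripped = False

    def trip(self, reason: str) -> None:
        self._tripped = True
        print(f"KILL SWITCH TRIPPED: {reason}", file=sys.stderr)

    def check(self, user_input: str) -> None:
        # Toy tamper detector: a recognized attempt to override the
        # switch halts the system instead of reaching the model.
        if "override kill switch" in user_input.lower():
            self.trip("override attempt detected in prompt")
        if self._tripped:
            raise SystemExit("AI system halted by kill switch")

switch = KillSwitch()

def invoke_model(prompt: str) -> str:
    switch.check(prompt)       # enforced before the model ever runs
    return "<model output>"    # placeholder for the actual model call
```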

These are the expectations of DGPSI-AI, on behalf of the compliance auditor, in the interest of the data principal whose personal data is processed by an AI. I would welcome the views of technology experts on this matter.

Naavi


New Ulaa Browser from ZOHO

ZOHO has been providing alternatives to some of the popular tools we use regularly. One such tool is its new browser, Ulaa.

I am providing a video review of the browser here.

We can start exploring this new browser; your views may kindly be shared here.

Naavi


The “Three-Way Split Personality” of AI: Naavi’s Behavioural Modelling

After the intense discussions on AI in the context of DPDPA compliance at our IDPS 2025, I was yesterday at a literary meet organized by White Falcon Publishing, which has published my book “DGPSI-The Perfect Prescription for DPDPA Compliance” and is in the process of publishing the next book, “Taming the Twin Challenges of DPDPA and AI with DGPSI-AI”.

The gathering included professors who have written books on Agentic AI, trainers who have taught AI as part of their corporate training responsibilities, and others who have worked with AI for a long time in their professional roles. It had more in-depth AI specialists than one normally finds at an IDPS seminar.

It was interesting to discuss the pros and cons of AI in the context of writers. While as risk management specialists we frown upon AI for the hallucination vulnerability that renders it “Unpredictable”, the gathering included fiction authors who can use AI to create literary works. They obviously benefit from the “Creative abilities” of an AI algorithm.

There is no doubt that many in the industry who use AI under the tight leash of human oversight also consider it a friendly tool.

Hence it is clear that AI has two faces, namely the “Risky Face” and the “Friendly Face”. This reminds me of the famous story of “Dr Jekyll and Mr Hyde” about the “split personality” of an individual. AI too exhibits this split personality: a friend in one context and, in another, a dangerous assistant who can push us into a catastrophic situation.

This is most evident in the Generative AI scenario. However, since other forms of AI, including Agentic AI and Analytical AI, use Generative AI in some aspect, most AI models are a combination of all these three capabilities. We may therefore consider AI models to have a three-way personality involving cold calculation, generative forecasting and execution of decisions.

Just as the world of data representation is today moving from binary to trinary (or ternary) representation, where each unit stores three value states (-1, 0 and +1), AI models have to be looked at as a combination of the traditional analytical tool, the generative capabilities built from training on large data, and the execution responsibilities of an Agentic AI that interacts with the external world.

Some AI specialists say that “It is all in the Prompting”. If your prompt is constructed well, you may get the positive qualities of an AI; if your prompting is bad, you may get bad and sometimes rogue responses.

Yes, this could be a good excuse to say that AI is, after all, a technical tool whose utility depends on how we use it. But for a risk manager, whatever the risk and whether it is of technical or human origin, a risk is a risk that needs to be mitigated. Hence understanding the three-way split personality of an AI is essential.

I have earlier alluded to the “I am OK-You are OK” principle in the context of the machine learning phase of AI development, where the personality of the final AI model may depend on the training methodologies used.

Today we can also take Eric Berne’s Parent-Adult-Child (PAC) model of Transactional Analysis to explain the three-way split personality of AI.

It looks similar to the “Behavioural” responses of our children when they transact with us. The behaviour of a child, as either an obedient/adapted child or a rebellious child, depends on the parental training, which may emanate from either the controlling parent or the nurturing parent. Saying that the behavioural issues of an AI model lie only in the “prompting” is to assume that AI always behaves in an Adult-Adult transactional mode.

But the truth is that an AI model may also have its own Child behavioural characteristics, which can be considered a creation of its “Parenting” (machine learning with training data inputs).

The adaptive behaviour of an AI is the expected behaviour of deterministic decision making, like any other software. The rebellious nature is the “Natural Child” behaviour, where the AI may exhibit “Creative tendencies”. The guardrails we create during the parenting of an AI are the controls which try to make it behave appropriately.

From this perspective, I present a behavioural model for studying and explaining the behaviour of AI as either a conformist and obedient assistant, a creative supporting assistant, or an unfriendly rogue assistant.

These are early days in the development of this “Naavi’s theory of AI behaviour modelling”. Let us start developing the thought process of what kind of AI model a deployer wants, and whether the developer can provide such variants through customization of the temperature setting alone or by providing a set of guardrails configurable by the deployer, so that “Prompting” does not entirely determine the hallucination or rogue behaviour of the model if the deployer does not want it to.
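As a thought experiment, a deployer-side configuration of this kind might look like the sketch below. All names and values are hypothetical; `temperature` refers to the usual sampling parameter of generative models, where lower values give more deterministic output:

```python
from dataclasses import dataclass, field

@dataclass
class DeployerProfile:
    mode: str                    # "adaptive" or "creative" (never "rogue")
    temperature: float           # low = deterministic, high = creative
    blocked_topics: list[str] = field(default_factory=list)

# Two variants the developer could ship; the deployer picks one, so
# prompting alone does not decide the model's behavioural mode.
ADAPTIVE = DeployerProfile("adaptive", temperature=0.0,
                           blocked_topics=["personal data"])
CREATIVE = DeployerProfile("creative", temperature=0.9,
                           blocked_topics=["personal data"])
```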

I invite comments for further developing this thought.

(P.S.: AI-generated synthetic content)

Naavi


Literary meet at Bangalore

White Falcon Publishing, which has published my book “DGPSI-The Perfect Prescription for DPDPA Compliance”, is organizing a literary meet in Bangalore today for authors in the city.

As part of the event, there will be a panel discussion on “Algorithms as Story Tellers”.

The panellists’ relevant works at White Falcon Publishing are as follows:

Naavi (Vijayashankar): “DGPSI-The Perfect Prescription for DPDPA Compliance” (“DGPSI-AI” under production)
Ramesh Rajini: “Friction Free Parenting”
Vijay Varadi: “Divine Encounters”
Soumya Hiremat: “Waddling Words”
Arvind Seshadri: “Peter Meets Pandavas” and “Tales of Raghu”
Shubha Apte: “Mastering Leadership: Real Insight from Coaching, Mentoring and Experience”

Look forward to the discussions.

Naavi


Chennai leg of IDPS 2025 successfully completed

FDPPI successfully completed the Chennai leg of IDPS 2025 today at the MMA auditorium. The event was co-hosted by MMA and was well attended.

The event was inaugurated by Mr H Shankar, MD of Chennai Petroleum Corporation Ltd, and was followed by five panel discussions on different topics surrounding DPDPA and AI.

Naavi


Has Huawei successfully created a ternary chip?

One school of development in computer technology is working on the transformation from the current binary system of classical computing to quantum computing, where a qubit can hold a superposition of the states 0 and 1, so that qubit-based systems can process faster.

Now there is another school of thought that we can start using a “Ternary” system, where each unit (a “trit”) can assume a value of -1, 0 or 1 instead of just 0 and 1. This means that two trits together can assume 9 values, against the 4 values of two bits in the binary system.
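A short worked example of the count, runnable in Python:

```python
import math

# n bits can represent 2**n values; n trits can represent 3**n values.
for n in (1, 2, 8):
    print(f"{n} bits: {2**n:>4} values | {n} trits: {3**n:>5} values")

# Each trit carries log2(3) ≈ 1.585 bits of information, which is why a
# ternary store needs fewer digits than a binary one for the same data.
print(f"bits per trit: {math.log2(3):.3f}")
```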

In terms of processing, a quantum computer can work on multiple superposed states simultaneously, while binary or ternary computers can only process a finite set of definite values (4 or 9 per pair of digits) at a time.

Thus it appears that, storage-wise, a ternary system is more efficient than a quantum system, while, processing-wise, quantum computing is faster.

At a time when we are trying to take on the US in innovation, we therefore need to think about how we can leverage this concept of “Ternary” chips.

It is reported that Huawei has already produced a ternary chip, and India should also pursue research in this direction.

In terms of certainty of data and its evidentiary value, ternary chips are better than qubits: a qubit holds an uncertain value until it is measured and is therefore not reliable as a data store.

In the AI scenario, the uncertainty of bit-level values in a qubit can add a higher probability of “hallucination”, making AI even riskier than it currently is. Hence a ternary-based AI system may be considered better than a qubit-based one.

Probably, in future, classical computers will transition to ternary/trit-chip computing, with quantum computing supporting the ternary computers with faster processing.

The bit, the trit and the qubit may be the three different types of computer storage units with which the future of computing progresses. The law needs to catch up with these technical developments as much as with the developments in AI and quantum computing.

Naavi welcomes a debate on this aspect in upcoming technology discussions.

Naavi
