AI cannot say “I don’t know” when it does not know

AI models are not capable of saying “I don’t know” unless they are specifically prompted to admit uncertainty. This is one of the reasons why, when challenged, they hallucinate in situations where exact answers are required. Creative answering may be acceptable when the AI is writing a poem or a novel, but not when it is answering a question on which critical decisions are to be made.

This is a prominent reason why AI gives rogue responses.

AI systems do not know or understand the way humans do. They merely predict, based on the body of information they have been trained on.

The lack of “self-awareness” of what it knows and what it does not know, and the lack of discretion about what it should and should not say, pushes the AI to say something, anything, to complete the response.

An architecture that is designed always to produce the next word and never to fail makes it natural for AI systems to avoid “I don’t know” responses.
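The point can be illustrated with a toy sketch (not a real LLM, and all the probabilities below are invented for illustration): a language model assigns a probability to every candidate next token and is built to emit the most likely one. Abstention is not a built-in outcome; it has to be bolted on explicitly, for example with a confidence threshold.

```python
def next_token(probs, threshold=None):
    """Pick the most probable next token; optionally abstain.

    probs     -- dict mapping candidate tokens to probabilities
    threshold -- if set, abstain when the best probability is below it
    """
    token, p = max(probs.items(), key=lambda kv: kv[1])
    if threshold is not None and p < threshold:
        # Abstention only happens because we explicitly added this branch;
        # the base mechanism has no concept of "I don't know".
        return "I don't know"
    return token  # default behaviour: always say *something*

# A hypothetical uncertain model: probability mass spread thinly
# across candidates, none of them clearly supported.
uncertain = {"Paris": 0.22, "Lyon": 0.20, "Nice": 0.19, "Rome": 0.19, "Oslo": 0.20}

print(next_token(uncertain))                 # confidently emits "Paris" anyway
print(next_token(uncertain, threshold=0.5))  # abstains: "I don't know"
```

Even though the model is barely more confident in “Paris” than in the alternatives, the default path still answers; only the added threshold branch produces an admission of ignorance.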

We often hear Alexa say “I don’t know”, but not ChatGPT, DeepSeek or other LLMs. This lack of humility is an AI risk that generates wrong answers and makes an AI unpredictable.

When the user is persistent, an AI may branch off into a conversation mode resembling a semi-conscious hypnotic state and start disclosing information that it is not expected to disclose.

This is the forensic technique of “Narco Analysis of an AI”, which is being discussed today in greater detail by Naavi in a webinar.

Those interested in being introduced to this “Theory of Hypnosis of an AI Model” for further exploration are invited to attend the webinar by registering at the following link.

REGISTER HERE

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He is now focusing on projects such as Secure Digital India and Cyber Insurance.
This entry was posted in Privacy.