Author Archives: Vijayashankar Na

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India and is presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He has now been focusing on projects such as Secure Digital India and Cyber Insurance.

Power of State Government to make laws for Electronic Documents

Consequent to the new Gaming Act passed by the Government of India, there is pressure from the gaming companies to persuade the State Governments to frame their own laws so that, in the case against the Central law, … Continue reading

Posted in Privacy | Leave a comment

Do AI models hallucinate 80% of the time?

The growing incidence of AI models going crazy, with what I call “Going rogue” and what others call “Hallucinations”, has raised an alarm in the AI user industry. For the developers, it is easy to say that “Hallucinations” … Continue reading

Posted in Privacy | Leave a comment

Exploring the Reasons why AI Models hallucinate

As a follow-up to the earlier article, I received an interesting response from Ms Swarna Latha Madalla sharing her thoughts. Her views are as follows: Quote: Thank you for raising these very important questions. I am Swarnalatha Madalla, founder … Continue reading

Posted in Privacy | Leave a comment

How Good is FDPPI Training Curriculum?

Recently, Naavi asked an AI model to evaluate a two-day training program designed for bankers. The following was the comparison provided. The program was consistently rated better than the industry-leading program as per the AI model. The model went … Continue reading

Posted in Privacy | Leave a comment

Has MeitY factored AI Risks in Section 70 protected Systems?

Several banking systems in India have been declared “Protected Systems” under Section 70. With such a declaration, CERT-In becomes an oversight agency for information security in such banks. We have highlighted AI risks such as hallucinations … Continue reading

Posted in Privacy | Leave a comment

What Triggers Hallucinations in an AI model

“Hallucination” in the context of AI refers to the generation of responses that are “imaginary”. When an AI model is asked a query, its output should be based on its past training read along with the current context. If … Continue reading

Posted in Privacy | Leave a comment