Several banking systems in India have been declared as "Protected Systems" under Section 70 of the Information Technology Act, 2000. With such a declaration, CERT-In becomes an oversight agency for information security in such Banks.
We have highlighted AI risks such as hallucinations in the AI models that are in wide use in the industry. Banks are probably also using AI, directly or indirectly, and are therefore exposed to these "Hallucination Risks".
In this context, we have tried to find a logical explanation for the DeepSeek incident reported in these columns and to identify the reasons for the hallucination.
Some of the standard reasons quoted for hallucination include:
1. Training data deficiency
2. Improper model configuration
3. Knowledge gaps
4. Incorrect decoding (see the illustrative sketch after this list)
5. Ambiguous prompts
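As a minimal, purely illustrative sketch of points 2 and 4 above (improper model configuration and incorrect decoding), the following Python snippet shows how raising the sampling "temperature" during decoding flattens the model's next-token probability distribution, so that low-probability, unsupported tokens get picked more often. The logits, the temperature values and the labelling of token 0 as the "well-supported" answer are all hypothetical assumptions chosen for illustration and have no connection to DeepSeek's actual configuration.

import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Scale logits by temperature and convert to probabilities (softmax).
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    # Draw one token index according to the scaled distribution.
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token logits: token 0 is the well-supported answer,
# tokens 1-4 are plausible-sounding but unsupported continuations.
logits = [5.0, 2.0, 1.5, 1.0, 0.5]
rng = np.random.default_rng(0)

for temperature in (0.2, 1.0, 1.8):
    picks = [sample_with_temperature(logits, temperature, rng) for _ in range(1000)]
    off_track = sum(p != 0 for p in picks) / 1000
    print(f"temperature={temperature}: unsupported token chosen {off_track:.0%} of the time")

With these hypothetical numbers, the unsupported tokens are almost never chosen at a low temperature but are chosen roughly a third of the time at a high temperature, which is why decoding settings are routinely listed among the causes of hallucination.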
However, the DeepSeek responses relating to personal data of Indians being sold and money being credited to some Cayman Islands account with HSBC, the bribing suggestions, and the whistle-blower silencing strategies do not fit into any of these known reasons.
I would like research to be conducted specifically on the DeepSeek responses to identify how the models are being built in a way that produces such irresponsible behaviour.
It is time for us to ask MeitY whether it is aware of such AI-related risks and whether any Government projects are potential victims of such risks. MeitY has declared many bank systems as "Protected Systems" and taken over the responsibility of security oversight in such Banks. MeitY needs to clarify whether it has taken steps to mitigate AI risks in such Banks.
Naavi