The dangers of BYOAi

Every day brings some new development in the AI world. We are all enthusiastic about the potential of AI to increase the productivity of our organizations. Many SMEs/MSMEs, and perhaps even the bigger organizations, are restructuring their manpower to use AI to reduce costs. Some believe that an agentic AI force can replace whole teams of employees for a given task.

The capability of AI is certainly visible in how it accomplishes some of our routine tasks in a fraction of a second.

However, another risk we are now seeing is the tendency of some employees to jump the gun and start using AI tools to improve their personal productivity, even creating their own personal AI agents. Some employers may be encouraging this; others may not even be aware of it.

This BYOAi, or "Bring Your Own AI", tendency, sometimes referred to as Shadow AI, is a new threat vector for organizations.
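
To make the threat concrete, here is a minimal sketch of how a security team might surface BYOAi usage from egress proxy logs. The watch-list of AI-service domains and the simplified "user domain" log format are illustrative assumptions for the example, not a description of any particular product or of DGPSI-AI.

```python
from collections import Counter

# Hypothetical watch-list of AI-service domains; a real deployment would
# maintain this from threat-intelligence feeds, not a hard-coded set.
AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_summary(proxy_log_lines):
    """Count, per user, outbound requests to known AI services.
    Each log line is assumed to be 'user domain' (whitespace separated),
    a simplification of typical proxy log formats."""
    hits = Counter()
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            hits[parts[0]] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "alice api.openai.com",
        "bob intranet.example.com",
        "alice claude.ai",
    ]
    for user, count in shadow_ai_summary(sample).items():
        print(f"{user}: {count} request(s) to AI services")
```

In practice an auditor would work from proxy or DNS logs and a maintained domain feed; the point is simply that BYOAi leaves detectable traces that an organization can choose to monitor.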

While we at FDPPI are launching DGPSI-AI as an extension of the DGPSI framework to help organizations mitigate AI risk, it is necessary to first appreciate the extent of the AI risks that are silently overtaking us.

In a recent compilation by AI enthusiast Mr Damien R Charlton, more than 358 legal cases involving AI hallucinations were tracked, including 227 cases in the USA and 28 each in Israel and Australia. At a time when many were arguing that courts could be replaced with AI, and that an AI tool is more honest than a dishonest judiciary, the recently observed hallucinations and rogue behaviour of AI have driven home a sense of caution.

A detailed analysis of these 358 cases needs to be attempted separately. Monetary sanctions have been imposed in many of them, though the amounts run only into thousands of dollars, nowhere near the millions and billions seen under GDPR and competition laws around the world. Public reprimands and warnings have been issued in most cases.

The highest penalty appears to have been levied in Crypto Open Patent Alliance v Wright, amounting to GBP 100,000, with the court observing that "documents, which .. bore the stamp of having been written using an AI engine, contained a series of falsehoods".

There were several other penalties: GBP 24,727 imposed in Bandla v Solicitors Regulation Authority (UK High Court, 13 May 2025), USD 31,100 in Lacey v State Farm General Insurance (California District Court, 6 May 2025), and USD 7,925 in In re Boy (Appellate Court of Illinois, July 21, 2025), the latter two both for filing fabricated case law.

These cases indicate that AI does lie, does fabricate outputs, and produces content that cannot be relied upon in responsible usage. Placing blind reliance on AI is therefore extremely risky, and replacing humans with AI is an unwise move.

It is for this reason that DGPSI-AI treats AI risk as an "unknown" risk that should be classified as a significant risk. All users of AI for personal data processing should accordingly be treated as "Significant Data Fiduciaries": they need to designate a DPO, conduct a DPIA, and organize an annual data audit.

Considering these developments and the unstoppable growth of AI, data auditors in India need to equip themselves not only with knowledge of the DPDPA but also, at least to some extent, with knowledge of AI, so that they can detect the use of AI and collect evidence of human oversight, possible hallucination, etc. Data auditors also need to verify whether any of the employees, or they themselves, use AI. In the ethical declarations signed by employees, disclosure of such usage should also be made mandatory.
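
As an illustration of one such audit check, the sketch below flags case citations in a filing that do not appear in a list the audit team has independently verified, marking them as candidates for manual hallucination review. The sample texts, the citation regex, and the workflow are hypothetical assumptions for the example, not requirements of DGPSI-AI or the DPDPA.

```python
import re

# A deliberately simple pattern for "Party v Party" style case names,
# requiring each word to be capitalized; a real audit would use
# jurisdiction-specific citation formats.
CASE_PATTERN = re.compile(
    r"\b[A-Z][\w.&']*(?: [A-Z][\w.&']*)* v\.? [A-Z][\w.&']*(?: [A-Z][\w.&']*)*"
)

def flag_unverified_citations(filing_text: str, verified: set[str]) -> list[str]:
    """Return case names cited in the filing that are absent from the
    independently verified list: candidates for manual hallucination review."""
    known = {v.lower() for v in verified}
    cited = CASE_PATTERN.findall(filing_text)
    return sorted({c for c in cited if c.lower() not in known})

if __name__ == "__main__":
    # Hypothetical filing text and verified list, for illustration only.
    filing = ("As held in Lacey v State Farm General Insurance, and in the "
              "entirely fictitious Smith v Imaginary Holdings, ...")
    verified = {"Lacey v State Farm General Insurance"}
    for candidate in flag_unverified_citations(filing, verified):
        print("REVIEW:", candidate)  # a human reviewer makes the final call
```

Such a script only surfaces candidates; the auditor's judgment, and the evidence of human oversight it generates, remain the point of the exercise.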

Naavi


About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He has now been focusing on projects such as Secure Digital India and Cyber Insurance.