AI industry needs to adopt Discipline

After recent press reports that “Intermediaries” would be required to obtain Meity's permission before deploying Generative AI solutions on public platforms, there was a spate of knee-jerk reactions from the industry.

One start-up founder reacted:

“I was such a fool thinking I will work bringing GenAI to Indian Agriculture from SF. We were training multimodal low cost pest and disease model, and so excited about it. This is terrible and demotivating after working 4 years full time bringing AI to this domain in India”

For every such positive contribution, we can point to many negative contributions of AI. What will the start-up community say about “Gemini” having been deployed without proper tuning? Is it not the responsibility of the Government to put a check on the “irresponsible use of AI”?

Before jumping in to criticise the Government, one should seek clarification. The clarification about the Meity advisory has now been issued, and it is applicable only to “Significant Intermediaries” such as Google, Instagram etc.

So, the industry can relax: prior permission is not required for every use of AI in its activities. But if you want to place a “Frankenstein”, or a potential one, before an unsuspecting public to be misled, there is a need for regulation.

Naavi.org has always advocated “Accountability” for all AI before even talking of Responsible AI, AI Ethics, Transparency, Explainability, Bias Control etc.

I repeat: every AI developer and AI deployer should identify themselves in any output generated by the AI. For example, if I have created an AI application X using an underlying algorithm Y created by a company Z, my disclosure in the output should be

“This is generated by X on algorithm Y created by Z”.
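As a minimal sketch, a disclosure in this format could be produced by a small helper; the function name and signature here are illustrative, not part of any standard or existing library:

```python
def disclosure_line(app: str, algorithm: str, creator: str) -> str:
    """Build a provenance disclosure string in the format proposed above.

    app       -- the AI application producing the output (X)
    algorithm -- the underlying algorithm it runs on (Y)
    creator   -- the company that created the algorithm (Z)
    """
    return f"This is generated by {app} on algorithm {algorithm} created by {creator}"


# Example: an application would append this line to every generated output.
print(disclosure_line("X", "Y", "Z"))
```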

What Meity has suggested is a move in this direction.

Naavi.org and FDPPI have suggested another innovative method for regulation, which Meity or even an NGO like FDPPI can implement. It is the “Registration of an AI before release”. In this system, Z registers his algorithm Y (ownership only, not the code) with the registrar and claims ownership, as with a Copyright or Patent. When the algorithm is licensed to X, the registration system can be invoked by either X or Z to record the user. The disclosure can then be just a registration number, which can be steganographically embedded in the output, such as in a deep fake video.
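To illustrate the last step, here is a minimal sketch of least-significant-bit embedding, assuming the registration number is a short ASCII string and the output is raw byte data (for example, uncompressed pixel or audio samples). All names are hypothetical; a real watermarking scheme for compressed video would be considerably more involved and robust:

```python
def embed_registration(media: bytearray, reg_no: str) -> bytearray:
    """Write each bit of reg_no (MSB first) into the least-significant
    bit of successive media bytes, returning a modified copy."""
    bits = [(byte >> i) & 1
            for byte in reg_no.encode("ascii")
            for i in range(7, -1, -1)]
    if len(bits) > len(media):
        raise ValueError("media too small to hold the registration number")
    out = bytearray(media)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # clear LSB, then set it to the bit
    return out


def extract_registration(media: bytes, length: int) -> str:
    """Recover a length-character ASCII string from the LSBs of media."""
    chars = []
    for c in range(length):
        value = 0
        for i in range(8):
            value = (value << 1) | (media[c * 8 + i] & 1)
        chars.append(chr(value))
    return "".join(chars)


# Round trip with dummy "media" bytes and a hypothetical registration number.
frame = bytearray(range(256))
tagged = embed_registration(frame, "REG-001")
print(extract_registration(bytes(tagged), 7))  # prints "REG-001"
```

Because only the lowest bit of each byte changes, the embedded number is imperceptible in the output, yet any party with the extraction routine can look up the registered owner.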

This is the measure which we suggested as the Ujvala innovation a few days back in these columns.

If AI is not regulated and accountability fixed, I anticipate that during this election time there will be a toolkit funded by George Soros to create chaos in the social media. We may have to shut down X, WhatsApp, Instagram and YouTube temporarily unless they put in place the necessary controls. All these organizations are already considered “Significant Social Media Intermediaries” under ITA 2000 and “Significant Data Fiduciaries” under DPDPA 2023, and there is a legal framework for imposing discipline.

Genuine AI start-ups need to realize that they have a responsibility not to start a fight with the Government over such regulatory measures.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He has now been focusing on projects such as Secure Digital India and Cyber Insurance.
This entry was posted in Cyber Law.
