AI Advisory for Intermediaries

The advisory issued by MeitY on 1st March 2024 has evoked concerns in the industry about a new compliance requirement. Though a clarification was issued today by the Minister, there are still doubts about what compliance requirements the advisory actually imposes.

It is therefore necessary to analyze the advisory and its impact.

Before we go further, it must be stated that the suggestion made in the advisory is exactly the same as what FDPPI/Naavi has suggested as part of the compliance requirements related to AI usage.

Para 3 of the advisory states as follows:

In the article “AI sandbox required to prevent a new Toolkit of Fake news”, released on 2nd March, we had stated

This is exactly what the MeitY advisory has done. We therefore welcome the advisory.

The advisory is issued under the Intermediary Guidelines and hence does not directly impose a punishment for non-compliance. However, in the event of any adverse consequence arising on account of an AI derivative hosted by an intermediary, the intermediary will be liable under the law without the safe harbor protection available to intermediaries.

The advisory states that all intermediaries to which it is applicable need to ensure compliance within 15 days of the advisory, i.e., by 15th March 2024. Applicability is restricted to cases where the information “may be potentially used as misinformation or deepfake”.

Since this compliance paragraph uses the words “All Intermediaries”, some people have pointed out that yesterday’s statement by Mr Rajeev Chandrashekar that it is required only for Significant Intermediaries is not correct.

While the prime concerns of the Government are the “Deep Fakes” and the “incorrect responses of Gemini”, it is important for us to appreciate that many companies are blindly incorporating AI solutions into their corporate offerings without proper assurances from the vendors or testing of their own.

Even under the current regulations, “Compliance” is essential in using AI, and we at FDPPI have been insisting that “Accountability” is the fundamental aspect of the “Responsible use of AI”.

What the Government is suggesting is that “Accountability” be ensured under a registration system, which some may call a “Licensing” system. FDPPI has suggested that such registration could also be carried out by NGOs like FDPPI, similar to the “Copyright Societies”.

We therefore feel that this is a good beginning for AI regulation, from which we can go further towards introducing a full-fledged AI regulation.

Naavi

Also Read:

Lessons for AI Regulation in Rashmika Mandanna Deepfake Incident

Deepfake further erodes credibility of the Internet

AI Sand Box required to prevent a new Toolkit of Fake News

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India and is presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He is now focusing on projects such as Secure Digital India and Cyber Insurance.
