AI Advisory for Intermediaries

The advisory issued by MeitY on 1st March 2024 has evoked concerns in the industry about a new compliance requirement. Though a clarification was issued today by the Minister, doubts remain about what compliance requirements the advisory actually imposes.

It is therefore necessary to analyze the advisory and its impact.

Before we go further, it must be stated that the suggestion made in the advisory is exactly the same as what FDPPI/Naavi has suggested as part of the compliance requirements related to AI usage.

Para 3 of the advisory states as follows:

In the article “AI sandbox required to prevent a new Toolkit of Fake news”, released on 2nd March, we had stated

This is exactly what the Advisory of March 4 from MeitY has done. We therefore welcome the advisory.

The advisory is issued under the Intermediary Guidelines and hence does not directly impose a punishment for non-compliance. However, in the event of any adverse consequence occurring on account of an AI derivative hosted by an intermediary, the intermediary will be liable under the law without the safe harbor protection of an intermediary.

The advisory states that all intermediaries to which it is applicable need to ensure compliance within 15 days of the advisory, which would be 15th March 2024. Applicability is restricted to usage “in such a manner that such information may be potentially used as misinformation or deepfake”.

This compliance paragraph uses the words “All Intermediaries”, and hence some people have pointed out that yesterday’s statement by Mr Rajeev Chandrashekar, that it is required only for Significant Intermediaries, is not correct.

While the prime concern of the Government is the “Deep Fake” and the “Incorrect responses of Gemini”, it is important for us to appreciate that many companies are blindly incorporating AI solutions into their corporate offerings without proper assurances from the vendors or their own testing.

Even under the current regulations, “Compliance” is essential in using AI, and we at FDPPI have been insisting that “Accountability” is the fundamental aspect of “Responsible use of AI”.

What the Government is suggesting is that “Accountability” be ensured under a registration system, which some may call a “Licensing” system. FDPPI has suggested that such registration can also be part of the activity of NGOs like FDPPI, similar to the “Copyright Societies”.

We therefore feel that this is a good beginning for AI regulation from where we can go further for introducing a full fledged AI regulation.

Naavi

Also Read:

Lessons for AI Regulation in Rashmika Mandanna Deepfake Incident

Deepfake further erodes credibility of the Internet

AI Sand Box required to prevent a new Toolkit of Fake News

Posted in Cyber Law | Leave a comment

AI industry needs to adopt Discipline

After the recent press reports about “Intermediaries” being required to take the permission of MeitY for deploying Generative AI solutions on public platforms, there was a spate of knee-jerk reactions from the industry.

One start-up founder reacted …

“I was such a fool thinking I will work bringing GenAI to Indian Agriculture from SF. We were training multimodal low cost pest and disease model, and so excited about it. This is terrible and demotivating after working 4 years full time bringing AI to this domain in India”

For one such positive contribution, we can show many negative contributions of AI. What will the start-up community say about “Gemini” having been deployed without proper tuning? Is it not the responsibility of the Government to put a check on “Irresponsible use of AI”?

Before jumping in to criticise the Government, one should seek clarification. The clarification about the MeitY advisory has now been issued, and it is applicable only to “Significant Intermediaries” such as Google, Instagram, etc.

So, the industry can relax: prior permission is not required for every use of AI in its activities. But if you want to place a “Frankenstein” or a “Potential Frankenstein” before an unsuspecting public to be misled, there is a need for regulation.

Naavi.org has always advocated “Accountability” for all AI before even talking of Responsible AI, AI Ethics, Transparency, Explainability, Bias control, etc.

I repeat, every AI developer and AI deployer should identify themselves in any of the outputs generated by the AI. For example, if I have created an AI application X using an underlying algorithm Y created by a company Z, my disclosure in the output should be

“This is generated by X on algorithm Y created by Z”.

What Meity has suggested is a move in this direction.

Naavi.org and FDPPI have suggested another innovative method for regulation which MeitY, or even an NGO like FDPPI, can implement. It is the “Registration of an AI before release”. In this system, Z will register his algorithm Y (only the ownership, not the code) with the registrar and claim ownership, like a Copyright/Patent. When the algorithm is licensed to X, the registration system can be invoked by either X or Z to record the user. Then the disclosure can simply be a registration number which can be steganographically embedded in the output, such as a deepfake video.
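The idea of embedding a registration number in an output can be sketched in code. The snippet below is purely illustrative and not the actual FDPPI/Ujvala system: the registration number format and the function names are hypothetical, and it uses simple least-significant-bit (LSB) embedding over raw pixel bytes, which a real system would need to harden against re-encoding and tampering.

```python
# Illustrative sketch only: hiding a hypothetical registration number in
# the least-significant bits (LSBs) of raw image pixel bytes. Plain LSB
# marks can be stripped by re-encoding, so this is a demonstration of the
# concept, not a tamper-proof scheme.

def embed_tag(pixels: bytearray, tag: str) -> bytearray:
    """Hide `tag` (with a 2-byte length prefix) in the LSBs of `pixels`."""
    payload = len(tag).to_bytes(2, "big") + tag.encode("utf-8")
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_tag(pixels: bytearray) -> str:
    """Recover the embedded tag from the LSBs of `pixels`."""
    def read_bytes(start_bit: int, n: int) -> bytes:
        value = 0
        for i in range(n * 8):
            value = (value << 1) | (pixels[start_bit + i] & 1)
        return value.to_bytes(n, "big")
    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length).decode("utf-8")

# Hypothetical registration number issued by a registrar:
cover = bytearray(range(256)) * 4          # stand-in for raw pixel bytes
stego = embed_tag(cover, "AIREG-2024-00042")
print(extract_tag(stego))                  # AIREG-2024-00042
```

Because only the lowest bit of each byte is touched, the visible image is essentially unchanged, while any auditor who knows the scheme can read the registration number back out.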

This is the measure which we suggested as the Ujvala innovation a few days back in these columns.

If AI is not regulated and accountability fixed, I anticipate that during this election time there will be a tool kit, funded by George Soros, to create chaos in social media. We may have to shut down X, WhatsApp, Instagram and YouTube temporarily unless they put in place the necessary controls. All these organizations are already considered “Significant Social Media Intermediaries” under ITA 2000 and also “Significant Data Fiduciaries” under DPDPA 2023, and there is a legal framework for imposing discipline.

Genuine AI start-ups need to realize that they have a responsibility, not to start a fight with the Government over such regulatory measures.

Naavi


Chakshu Portal launched for reporting spam calls

In a welcome move, the Government of India has introduced a new mechanism for reporting spam calls. The portal “Chakshu” has been opened as part of the Sanchar Sathi website. Chakshu is meant to be used by citizens to report suspected fraudulent communication, wherein users can report numbers, messages and phishing attempts.

The Sanchar Sathi website also provides the following citizen-centric services:

  1. Block your lost or stolen mobile
  2. Know your Mobile Connections (to know how many SIM cards exist in the name of a person)
  3. IMEI verification
  4. Report incoming international call with Indian number
  5. Know your wireline ISP

Naavi


AI Sand Box required to prevent a new Toolkit of Fake News

India is a fertile ground for misuse of AI through Fake news creation and distribution. It is expected that this would grow multiple times in the next few months and there could be an international tool kit under development to use Deep Fake videos to disturb the electoral democracy of India.

The Government is hesitating to notify the DPDPA rules, which could rein in agencies involved in the distribution of online news.

In this phase we can expect AI-created deepfakes to proliferate through X, WhatsApp, Instagram and YouTube.

Simultaneously this will make any information on the Internet unreliable.

The challenge is therefore to identify what is to be accepted as credible information when it is presented online.

Apart from a notification that can be given under ITA 2000 without any further need for a change of law, we urge that, as a part of ethical use of AI, the following measures be initiated.

It is essential that any responsible AI developer incorporate code in the software so that a signature of the original developer and the licensee is embedded into any created image, video or text through a steganographic inscription which cannot be altered or destroyed.
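As a toy illustration of such an inscription for text output (the signature format here is an assumption, not a prescribed standard), a developer/licensee signature can be hidden in zero-width Unicode characters appended to the generated text. This simple scheme survives copy-paste but not deliberate sanitisation, so it falls well short of the “cannot be altered or destroyed” requirement; a production mark would need far more robust watermarking.

```python
# Illustrative sketch: encode a developer/licensee signature as bits and
# inscribe it invisibly using zero-width Unicode characters. The
# "dev:...;licensee:..." format below is hypothetical.

ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1

def inscribe(text: str, signature: str) -> str:
    """Append `signature`, bit by bit, as invisible characters."""
    bits = "".join(f"{b:08b}" for b in signature.encode("utf-8"))
    mark = "".join(ONE if bit == "1" else ZERO for bit in bits)
    return text + mark

def read_inscription(text: str) -> str:
    """Collect the zero-width characters and decode the hidden signature."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in text if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

tagged = inscribe("Generated caption.", "dev:Z;licensee:X")
print(read_inscription(tagged))   # dev:Z;licensee:X
```

The tagged text renders identically to the original on screen, yet an auditor filtering for the zero-width characters can recover the developer and licensee identities.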

Attention of MeitY is drawn to ensure that this control is notified immediately under ITA 2000 before it is too late.

This “Genuinity Tag” should be embedded in all AI outputs and taken note of by genuine users and AI auditors as a necessary measure for AI-related compliance.

A regulatory agency or an NGO should take the responsibility to “Register” genuine AI and issue certificates of reliability assurance as a part of the AI algorithm audit.

Ujvala Consultants Pvt Ltd is in the process of developing such a registration system leading to an AI-DTS evaluation.

(Watch out for more)

Naavi


Regulatory Sandbox of RBI and DPDPA

Yesterday, RBI also released a document, namely “Enabling Framework for Regulatory Sandbox”, which inter alia attracted the interest of Data Protection professionals because a reference was made to DPDPA.

RBI is a sectoral regulator, and how its regulations may overlap with DPDPA is being closely watched.

Under Section 16(2) of DPDPA, which applies to Cross border transfer of personal data, it is stated that…

“Nothing contained in this section shall restrict the applicability of any law for the time being in force in India that provides for a higher degree of protection for or restriction on transfer of personal data by a Data Fiduciary outside India in relation to any personal data or Data Fiduciary or class thereof”.

Since RBI already has some stricter regulations regarding transfer of data by its Regulated Entities (REs), which may be both personal and non-personal, it is understood that those regulations will remain.

Under Section 17(1)(b), certain provisions of Chapter II, Chapter III and Section 16 are not applicable to the processing “of personal data by … or any other body in India which is entrusted by law with the performance of any … regulatory or supervisory function, where such processing is necessary for the performance of such function;”

However, under the new framework for the regulatory sandbox for the Fintech industry, once a sandbox scheme is approved by RBI, Fintech regulatory compliance will be supported through some relaxations by RBI.

However, the sandbox entity must process all the data in its possession or under its control with regard to Regulatory Sandbox testing in accordance with the provisions of the Digital Personal Data Protection Act, 2023. In this regard, the sandbox entity should have appropriate technical and organisational measures to ensure effective compliance with the provisions of the Act and the rules made thereunder. Further, the sandbox entity should ensure adequate safeguards to prevent any personal data breach.

In the event such start-ups are notified by MeitY under DPDPA, Section 5, Section 8(3), Section 8(7), Section 10 and Section 11 of the DPDPA may be exempted.

Section 5 covers “Notice”. Section 8(3) covers accuracy and updating when the data is used for disclosure or automated decision-making, Section 8(7) covers data retention and erasure, Section 10 covers the “Significant Data Fiduciary” obligations, and Section 11 covers the Right to Access.

A start-up working inside an RBI sandbox and notified by MeitY will have the benefits of both Section 17(3) with the above exemptions and the RBI exemptions as provided under the notification.

The RBI notification reiterates that RBI will manage the Fintech regulations and MeitY will regulate the DPDPA regulations. There is no other special impact of the RBI regulation on DPDPA.

There is, however, one observation. The RBI notification is currently applicable and recognizes the existence of DPDPA, though the Act is yet to be notified for effect. In a way, RBI is validating the effectiveness of DPDPA even today.

Naavi


RBI also refers to Climate Change Impact on Financial Risk

Yesterday, RBI issued a Draft Disclosure Framework on Climate-related Financial Risks, 2024, applicable to regulated entities (REs). Comments/feedback, if any, may be sent by e-mail with the subject line “Comments on Disclosure framework on Climate-related Financial Risks, 2024” by April 30, 2024.

The policies proposed will have a cascading impact on loan customers and hence are of interest to the industry as well.

The disclosures for REs will cover Governance, Strategy, Risk Management, and Metrics and Targets. Governance, Strategy and Risk Management may be rolled out from FY 2025-26 onwards for Banks and the top layer of NBFCs. Metrics and Targets may be rolled out in the following year. For urban cooperative banks the roll-out may be deferred by an additional year, and for others the dates are yet to be announced.

Since the risks of Banks and NBFCs are related to those of their customers, the REs will have to collect information from and impose norms on their customers in terms of not only Governance, Strategy and Risk Management, but also the Metrics.

The discussion on AI and climate change appears relevant in this context, since customers who are users of AI may be required to disclose information to investors, who in turn have to submit the consolidated information to RBI.

Naavi
