“Responsible AI” and “Accountable AI”

As the use of AI proliferates, we often hear a demand for “Responsible AI”. We have, however, been repeating that the foundation of Responsible AI lies in AI being accountable. Accountability means that there should be an organization that takes up the responsibility to own the consequences of AI. This is precisely what the recent AI advisory for intermediaries from MeitY has done.

It is important to recognize that AI cannot be treated as an independent juridical entity. It is owned either by the developer, the company developing the AI algorithm/product, or the licensee. The advisory captures this aspect and hence brings in “Accountability” as the leading requirement of AI. Hence the “Responsibility” of AI is embedded in the “Accountability” of the licensee/owner.

It is Accountability that has to be in place to make “Responsibility” an automatic consequence.

While on the subject, I came across a discussion on TV about a start-up, “AI-Kavach”, which is developing a Cyber Security product using AI. The promoter, Ms Pratyusha Vemuri, was successful in getting funding support in Shark Tank India Season 3 and made news.

The discussion on Shark Tank is interesting, since the company got a valuation of around Rs 20 crores with a funding of Rs 1 crore for a total equity of 5%. The company currently sports a modest 20,000 downloads and 1,500 paid subscriptions (at Rs 99 per year) in the consumer segment. Many will know that, in most cases, such numbers of downloads occur in a single day if a product catches the imagination of the market. But as of today, downloads in the consumer segment appear to have been suspended, and the company wants to operate in the B2B segment for the time being. It is not clear whether this decision came after the MeitY advisory.

We wish the entrepreneur good luck in harnessing the power of AI.

One of the sharks rightly noted that the algorithm has to work in the B2B environment and learn consumer behaviour by profiling, which would then be used for fraud detection. In other words, the business model is to watch internet and mobile behaviour by sitting on intermediaries such as the Airtel and Jio networks, across the entire universe of users, and derive intelligence from which fraudulent customers can be identified. This is “Personal Data Mining” and “Big Data Processing”. For Airtel and Jio, it would mean selling the personal data of their customers.

While the entrepreneur indicated that a patent is pending in India, I am doubtful whether the novelty would be adequate for the patent to be granted.

Using AI-based processing to detect and prevent frauds is an established Cyber Security activity and therefore lacks patentable novelty. RBI has also made such processing mandatory through “Adaptive Authentication”. However, labelling the product as “AI” has given it a marketable value.

It is not clear whether the entrepreneur or the funding sharks have identified the Privacy/DPDPA risks. However anonymous the process of collection may be, such mass profiling carries a high level of Privacy risk.

In another TV interview, the entrepreneur appeared to be banking on anonymisation of the data analysis. I hope the company is able to cover the Privacy risks in whatever processing it plans to do as part of the AI processing.

B2B users should, however, remember that they need to adopt certain compliance measures themselves before adopting such products into their processes.

The Shark Tank episode aired about a month ago, and the AI advisory of 4th March 2024 should be considered a jolt to the company.

(Comments welcome)

Naavi

Also Read

https://www.reddit.com/r/sharktankindia/comments/1abqtgl/ai_kavach_is_gone/?rdt=43204

Posted in Cyber Law | Leave a comment

AI Advisory for Intermediaries

The advisory issued by MeitY on 1st March 2024 has evoked concerns in the industry about a new compliance requirement. Though a clarification was issued today by the Minister, there are still doubts about what compliance requirements are suggested by the advisory.

It is therefore necessary to analyze the advisory and its impact.

Before we go further, it must be stated that the suggestion made in the advisory is exactly the same as what FDPPI/Naavi have suggested as part of the compliance requirements related to AI usage.

Para 3 of the advisory states as follows:

In the article “AI sandbox required to prevent a new Toolkit of Fake news”, released on 2nd March, we had stated

This is exactly what the Advisory of March 4 from MeitY has done. We therefore welcome the advisory.

The advisory is issued under the Intermediary Guidelines and hence does not directly impose a punishment for non-compliance. However, in the event of any adverse consequence on account of an AI derivative hosted by an intermediary, the intermediary will be liable under the law without the safe harbor protection available to intermediaries.

The Advisory states that all intermediaries to which it is applicable need to ensure compliance within 15 days of the advisory, i.e., by 15th March 2024. Applicability is restricted to cases where the information “may be potentially used as misinformation or deepfake”.

This compliance paragraph uses the words “All Intermediaries”, and hence some people have pointed out that yesterday’s statement by Mr Rajeev Chandrashekar, that it is required only for Significant Intermediaries, is not correct.

While the prime concerns of the Government are “Deep Fakes” and the “incorrect responses of Gemini”, it is important for us to appreciate that many companies are blindly incorporating AI solutions into their corporate offerings without proper assurances from the vendors or their own testing.

Even under the current regulations, “Compliance” is essential in using AI, and we at FDPPI have been insisting that “Accountability” is the fundamental aspect of the “Responsible use of AI”.

What the Government is suggesting is that “Accountability” be ensured under a registration system, which some may call a “Licensing” system. FDPPI has suggested that such registration can also be part of the activity of NGOs like FDPPI, similar to the “Copyright Societies”.

We therefore feel that this is a good beginning for AI regulation, from where we can go further towards introducing a full-fledged AI regulation.

Naavi

Also Read:

Lessons for AI Regulation in Rashmika Mandanna Deepfake Incident

Deepfake further erodes credibility of the Internet

AI Sand Box required to prevent a new Toolkit of Fake News


AI industry needs to adopt Discipline

After the recent press reports about “Intermediaries” being required to take the permission of MeitY for deploying Generative AI solutions on public platforms, there was a spate of knee-jerk reactions from the industry.

One start-up founder reacted…

“I was such a fool thinking I will work bringing GenAI to Indian Agriculture from SF. We were training multimodal low cost pest and disease model, and so excited about it. This is terrible and demotivating after working 4 years full time bringing AI to this domain in India”

For one such positive contribution, we can show many negative contributions of AI. What will the start-up community say about “Gemini” having been deployed without proper tuning? Is it not the responsibility of the Government to put a check on the “Irresponsible use of AI”?

Before jumping in to criticise the Government, one should seek clarification. The clarification about the advisory of MeitY has now been issued, and it is applicable only to “Significant Intermediaries” such as Google, Instagram, etc.

So, the industry can relax: the prior permission is not for every use of AI in its activity. But if you want to place a “Frankenstein”, or a “Potential Frankenstein”, before an unsuspecting public to be misled, there is a need for regulation.

Naavi.org has always advocated “Accountability” for all AI, before even talking of Responsible AI, AI Ethics, Transparency, Explainability, Bias control, etc.

I repeat, every AI developer and AI deployer should identify themselves in any of the outputs generated by the AI. For example, if I have created an AI application X using an underlying algorithm Y created by a company Z, my disclosure in the output should be

“This is generated by X on algorithm Y created by Z”.

What Meity has suggested is a move in this direction.

Naavi.org and FDPPI have suggested another innovative method for regulation which MeitY, or even an NGO like FDPPI, can implement. It is the “Registration of an AI before release”. In this system, Z will register his algorithm Y (only the ownership, not the code) with the registrar and claim ownership, like a Copyright/Patent. When the algorithm is licensed to X, the registration system can be invoked by either X or Z to record the user. Then the disclosure can simply be a registration number, which can be steganographically embedded in the output, such as in a deep fake video.

This is the measure which we suggested as the Ujvala innovation a few days back in these columns.
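The registration-and-disclosure flow described above can be sketched in code. This is only an illustrative model, assuming a simple in-memory store; the names (`AIRegistry`, `AIR-` numbers, etc.) are hypothetical and do not come from any actual MeitY or FDPPI system.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class AIRegistry:
    """Illustrative in-memory registry of AI algorithm ownership and licences."""
    records: dict = field(default_factory=dict)

    def register(self, owner: str, algorithm: str) -> str:
        """Record ownership (not the code itself) and return a registration number."""
        digest = hashlib.sha256(f"{owner}:{algorithm}".encode()).hexdigest()
        reg_no = "AIR-" + digest[:10].upper()
        self.records[reg_no] = {"owner": owner, "algorithm": algorithm, "licensees": []}
        return reg_no

    def license(self, reg_no: str, licensee: str) -> None:
        """Record a deployer (licensee) against the registered algorithm."""
        self.records[reg_no]["licensees"].append(licensee)

    def disclosure(self, reg_no: str, app: str) -> str:
        """Build the disclosure line to be embedded in every output."""
        rec = self.records[reg_no]
        return (f"This is generated by {app} on algorithm {rec['algorithm']} "
                f"created by {rec['owner']} [Reg: {reg_no}]")

# Usage: Z registers algorithm Y, licenses it to deployer X
registry = AIRegistry()
reg_no = registry.register(owner="Z", algorithm="Y")
registry.license(reg_no, licensee="X")
disclosure = registry.disclosure(reg_no, app="X")
```

The registration number is derived from a hash of the ownership claim, so either party can later verify the same number independently.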

If AI is not regulated and accountability fixed, I anticipate that during this election time there will be a tool kit funded by George Soros to create chaos in the social media. We may have to shut down X, WhatsApp, Instagram and YouTube temporarily unless they put in place the necessary controls. All these organizations are already considered “Significant Social Media Intermediaries” under ITA 2000 and also “Significant Data Fiduciaries” under DPDPA 2023, and there is a legal framework for imposing discipline.

Genuine AI start-ups need to realize that they have this responsibility, and not start a fight with the Government over such regulatory measures.

Naavi


Chakshu Portal launched for reporting spam calls

In a welcome move, the Government of India has introduced a new mechanism for reporting spam calls. The portal “Chakshu” has been opened as part of the Sanchar Sathi website. Chakshu is meant to be used by citizens to report suspected fraudulent communication, wherein users can report numbers, messages and phishing attempts.

The Sanchar Sathi website also provides the following citizen-centric services:

  1. Block your lost or stolen mobile
  2. Know your Mobile Connections (to know how many SIM cards exist in the name of a person)
  3. IMEI verification
  4. Report incoming international call with Indian number
  5. Know your Wireline ISP

Naavi


AI Sand Box required to prevent a new Toolkit of Fake News

India is a fertile ground for the misuse of AI through fake news creation and distribution. It is expected that this will grow many times over in the next few months, and there could be an international tool kit under development to use deep fake videos to disturb the electoral democracy of India.

The Government is hesitating to notify the DPDPA rules, which could bring agencies involved in the distribution of online news under rein.

In this phase, we can expect AI-created deep fakes to proliferate through X, WhatsApp, Instagram and YouTube.

Simultaneously this will make any information on the Internet unreliable.

The challenge is therefore to identify what is to be accepted as credible information when it is presented online.

Apart from a notification that can be given under ITA 2000 without any further need for a change of law, we urge that, as a part of the ethical use of AI, the following measures be initiated.

It is essential that every responsible AI developer incorporate code in the software such that a signature of the original developer and the licensee is embedded into any created image, video or text through a steganographic inscription which cannot be altered or destroyed.

The attention of MeitY is drawn to ensure that this control is notified immediately under ITA 2000, before it is too late.

This “Genuinity Tag” should be embedded in all AI and taken note of by genuine users and AI auditors as a necessary compliance measure for AI related compliance.
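To make the idea of a steganographic “Genuinity Tag” concrete, here is a minimal least-significant-bit (LSB) sketch over raw pixel bytes. It is purely illustrative: a plain LSB embed is easily destroyed by re-encoding or resizing, so the “cannot be altered or destroyed” requirement would in practice need a robust, cryptographically signed watermark. The function names are hypothetical.

```python
def embed_tag(pixels: bytes, tag: bytes) -> bytearray:
    """Hide a length-prefixed tag in the least significant bit of each carrier byte."""
    payload = len(tag).to_bytes(2, "big") + tag
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_tag(pixels: bytes) -> bytes:
    """Read the 2-byte length prefix, then the tag, back out of the LSBs."""
    def read(start_bit: int, n_bytes: int) -> bytes:
        value = bytearray()
        for b in range(n_bytes):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[start_bit + b * 8 + i] & 1)
            value.append(byte)
        return bytes(value)
    length = int.from_bytes(read(0, 2), "big")
    return read(16, length)

# Usage: stamp a hypothetical registration number into dummy grayscale pixel data
carrier = bytes(range(64)) * 4            # 256 carrier bytes
stamped = embed_tag(carrier, b"AIR-1234567890")
```

Because only the lowest bit of each byte changes, the stamped data is visually indistinguishable from the original, yet the tag can be recovered exactly by `extract_tag`.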

A regulatory agency or an NGO should take the responsibility to “Register” genuine AI and issue certificates of reliability assurance as a part of the AI algorithm audit.

Ujvala Consultants Pvt Ltd is in the process of developing such a registration system leading to an AI-DTS evaluation.

(Watch out for more)

Naavi


Regulatory Sandbox of RBI and DPDPA

Yesterday, RBI also released a document, namely the “Enabling Framework for Regulatory Sandbox”, which inter alia attracted the interest of Data Protection professionals because a reference was made to DPDPA.

RBI is a sectoral regulator, and how its regulations may overlap with DPDPA is being closely watched.

Under Section 16(2) of DPDPA, which applies to Cross border transfer of personal data, it is stated that…

“Nothing contained in this section shall restrict the applicability of any law for the time being in force in India that provides for a higher degree of protection for or restriction on transfer of personal data by a Data Fiduciary outside India in relation to any personal data or Data Fiduciary or class thereof”.

Since RBI already has some stricter regulations regarding the transfer of data by its Regulated Entities (REs), which may be both personal and non-personal, it is understood that those regulations will remain.

Under Section 17(1)(b), certain provisions of Chapter II, Chapter III and Section 16 are not applicable to the processing “of personal data by … or any other body in India which is entrusted by law with the performance of any … regulatory or supervisory function, where such processing is necessary for the performance of such function;”

However, under the new Framework for the Regulatory Sandbox for the Fintech industry, once a sandbox scheme is approved by RBI, Fintech regulatory compliance will be supported through some relaxations by RBI.

However, the sandbox entity must process all the data in its possession or under its control with regard to Regulatory Sandbox testing in accordance with the provisions of the Digital Personal Data Protection Act, 2023. In this regard, the sandbox entity should have appropriate technical and organisational measures to ensure effective compliance with the provisions of the Act and the rules made thereunder. Further, the sandbox entity should ensure adequate safeguards to prevent any personal data breach.

In the event such start-ups are notified by MeitY under DPDPA, Section 5, Sections 8(3) and 8(7), and Sections 10 and 11 of the DPDPA may be exempted.

Sec 5 is “Notice”. Sec 8(3) covers accuracy and updation where the data is used for disclosure or automated decision making, Sec 8(7) covers data retention and erasure, Sec 10 is the “Significant Data Fiduciary” obligation, and Section 11 is the Right to access.

A start-up working inside an RBI sandbox and notified by MeitY will have the benefits of both Section 17(3), with the above exemptions, and the RBI exemptions as provided under the notification.

The RBI notification reiterates that RBI will manage the Fintech regulations and MeitY will regulate the DPDPA regulations. There is no other special impact of the RBI regulation on DPDPA.

There is, however, one observation. The RBI notification is currently applicable and recognizes the existence of DPDPA, though DPDPA is yet to be notified for effect. In a way, RBI is validating the effectiveness of DPDPA even today.

Naavi
