AI Risk Management under DPDPA 2023

“Artificial Intelligence” is the new term sweeping the software world, and naturally it has also percolated into the discussions on “Privacy” and DPDPA 2023.

The industry is now presented with a new standard, ISO 42001, and along with ISMS and PIMS, the concept of AIMS has now become the buzzword.

ISO 42001 is a standard that tries to establish the requirements of an AIMS (Artificial Intelligence Management System) that focuses on the system being a “Responsible AI System”. The standard can be used both by the AI developer and the AI user.

Though the standard should be a good guideline for many companies, it appears that as regards privacy, the AIMS as suggested needs some more tweaking.

AIMS as envisaged is like PIMS and has to be considered part of the ISMS. In other words, though a stand-alone certification is envisaged under ISO 42001, an organization cannot avoid ISO 27701 and ISO 27001 if it has to adopt ISO 42001 for Privacy. As a result, about 40 new controls get added to the 93 controls of ISO 27001 and the 49 controls of ISO 27701.

In the DGPSI system, FDPPI proposes to consider AIMS, PIMS and ISMS as part of the DGPMS and accommodates all the controls within 50 implementation specifications. In this approach, most of the individual controls of the ISO system that make it bulky and unwieldy get absorbed in the customization of controls through the policies and processes developed in the user environment, as the rough comparison below indicates.
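To put the scale in perspective, here is a back-of-the-envelope comparison using the counts quoted above (a sketch only; the ISO 42001 figure is approximated as 40 from the post's “about 40”):

```python
# Control-count comparison implied by the post. Individual counts are
# taken from the text; ISO 42001 is approximated as 40 ("about 40").
iso_stack = {"ISO 27001": 93, "ISO 27701": 49, "ISO 42001": 40}
total_iso_controls = sum(iso_stack.values())  # ~182 controls in the full stack
dgpsi_specifications = 50                     # DGPSI implementation specifications

print(f"ISO stack: ~{total_iso_controls} controls vs "
      f"DGPSI: {dgpsi_specifications} implementation specifications")
```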

We hope this simplification will be useful to the industry and leave scope for implementers to design the controls as per their specific needs.

Naavi


Implications of US Bill on Cross border data transfer

A Bill has been passed in the US to protect the sensitive data of US citizens by restricting cross-border transfer of data to countries considered “Adversaries”. To some extent this reflects the thought behind Section 16 of DPDPA 2023, which also has an enabling provision to restrict transfer of personal data collected in India to countries which may be notified as “Blacklisted Countries”. China, Iran, North Korea, Cuba, Russia and the Maduro government in Venezuela are currently on the list of such adversaries. India is yet to declare its blacklist of countries under Section 16.

(Refer to the report on cnbc.com)

The bill bans organizations that profit from selling personal data, known as data brokers, from making data accessible to a foreign adversary country or entities controlled by adversaries.

It also authorizes the Federal Trade Commission to seek civil penalties of more than $50,000 for each violation.

India has to stay on guard so that this list of countries does not become flexible enough to include any country on which ad hoc sanctions are imposed. We may recall that in the early days of the Ukraine war, many US companies cited US sanctions to threaten stoppage of IT services in India. This makes the country's dependence on US companies, including Microsoft and Google, a long-term national security risk.

Naavi


“We Want a License to Misinform”???

Mr Kunal Kamra, the “Stand Up Comedian”, has approached the Supreme Court to challenge the Central Government notification forming Fact Checking Units (FCUs) under the Information Technology Amendment Rules, 2023.

Effectively it is a plea that demands the right to call oneself a “Comedian” and publish false information in the guise of parody or fun.

The earlier attempt to get a stay on the MeitY rules was caught in a split judgement at the Bombay High Court and no “Stay” was granted. Now that the elections are in the offing and there is a dire need on the part of some people to spread false news, the petitioners have again approached the judicial system with an appeal to the Supreme Court.

On 20th March, MeitY issued a notification designating the fact check unit under the Press Information Bureau (PIB) as the fact check unit of the Central Government for the purposes of these rules.

According to the news report at Bar and Bench, Mr Kamra pressed for an immediate stay of the rules stating

“While the Impugned Rules (and the IT Rules in general) are facially directed at intermediaries, it is users (and the information created and hosted by them on various platforms) that are the subject of the Impugned Rules … [it] is extremely broad in its sweep, and would operate to muzzle speech against the Central Government,” 

Mr Kamra falsely claimed

“The units would be empowered to direct social media companies to take down any content the government deems fake, false or misleading, but without due process.”

This is false, since the FCU only has the power to notify that a certain news item is fake and does not have the right to give any direction to the intermediary.

He further contends

“The Impugned Rules do not contemplate the issuance of a notice to the user prior to the identification of information by the FCU, or prior to the takedown by the intermediary … they would inter alia apply to any content hosted by intermediaries that contradict facts, figures or data of the government,”

This is an impossibility, since the identity of the user is known only to the platform and the FCU will not know the source except as declared. The platform, as an intermediary, is responsible for notifying its users not to do things which are illegal. Spreading false information is illegal per se.

Mr Kamra says

“Petitioner, who is a political satirist, makes his living out of commenting on what can be stated to be the “business of the Central Government”. Further, most of the Petitioner’s engagement with the public is driven by social media. Any hindrance in accessing the accounts of the Petitioner will be a violation of his fundamental right to practice his profession … will affect his livelihood and right to live with dignity,”

It is to be noted that the so-called fundamental right of the petitioner is an infringement of the fundamental right of the citizens, who are entitled not to be cheated and misled by false information.

The petition is set to be heard by the Supreme Court today. Knowing the Supreme Court, we will not be surprised if it grants the stay even though Mr Kapil Sibal and Mr Abhishek Manu Singhvi are not representing the petitioner.

Before such a knee-jerk reaction, I urge the Supreme Court to read the words and intentions behind the relevant notification and consider why the petition is a demand to disrupt fair information dissemination during election time, when a Model Code of Conduct is in force. Mr Kamra wants to indulge in election campaigning without being bothered by the need to maintain truth.

I would like to reproduce the relevant clause 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, under which PIB has now been notified as a Fact Checking Unit.

Due diligence by an intermediary: (Rule 3(1))

An intermediary, including [social media intermediary, significant social media intermediary and online gaming intermediary], shall observe the following due diligence while discharging its duties, namely:—

(b) the intermediary shall inform its rules and regulations, privacy policy and user agreement to the user in English or any language specified in the Eighth Schedule to the Constitution in the language of his choice and shall make reasonable efforts to cause the user of its computer resource not to host, display, upload, modify, publish, transmit, store, update or share any information that,—

(v) deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature or is identified as fake or false by the fact check unit at the Press Information Bureau of the Ministry of Information and Broadcasting or other agency authorised by the Central Government for fact checking or, in respect of any business of the Central Government, by its department in which such business is transacted under the rules of business made under clause (3) of article 77 of the Constitution];

The petitioners and the Court need to understand that the guideline is a “due diligence requirement” under which the intermediary shall inform, through its published policy document, that users of the platform shall not “knowingly” or “intentionally” host misinformation which is “patently false” or identified as false by the Fact Check Unit.

It is to be noted that the requirement penalizes neither the person hosting the content nor the platform. It only denies the platform the protection under Section 79 of ITA 2000 in the event of any Court being moved on the ground that the “false” information is causing damage.

It is a figment of the petitioner's imagination that, because a future case could be hoisted on the platform and the defence of Section 79 would then be unavailable to it, the platform will today not allow Mr Kamra to host his shows, through which, by his own admission, he “makes his living out of commenting on what can be stated to be the business of the Central Government”. That is a high-risk business which can disrupt the electoral democracy of the country.

Naavi.org, as a representative of the citizens of India, contends that the petition is a request for a license to indulge in mischief and disturb the proper conduct of the elections in India. Allowing misinformation without accountability of the platform will not only enable Mr Kamra to misuse social media but also encourage the use of deepfakes and other technologies to misinform the public at a time when no remedy will be available to prevent damage to the electoral process.

Hence we hope that the Supreme Court does not allow this petition, particularly at this point of time when the nation is in the middle of an election. The Court should also consider the views of the Election Commission in this regard and not issue a stay based on the petition, which would be unjustified and would contribute to the vitiation of the electoral atmosphere.

Naavi


The EU Act on Artificial Intelligence

After long deliberation, the EU Parliament has adopted the EU AI Act, setting in motion a GDPR-like moment where similar laws may be considered by other countries. India is committed to revising ITA 2000 and replacing it with a new Act, which may happen in 2024-25 after the next elections, and the new Act should include special provisions for regulating AI.

Presently, Indian law addresses AI through ITA 2000 and DPDPA 2023. ITA 2000 assigns accountability for AI to the AI developers, who may transfer it to the licensees of the algorithms developed (Section 11 of ITA 2000). Where the AI model uses personal data for its learning, DPDPA 2023 may apply and treat the algorithm user as a “Data Fiduciary” responsible for consent and accuracy of processing.

An advisory issued recently by MeitY has suggested that platforms which permit hosting of AI derivatives (e.g., videos) need to take the permission of MeitY.

DGPSI, a framework for implementation of DPDPA 2023, suggests that an AI algorithm vendor be considered a “Data Processor”/“Joint Data Fiduciary” and that a DPIA be conducted before the algorithm's adoption.

In the light of the above, we can quickly examine the approach of the EU AI Act and draw some thoughts from it for implementing “Due Diligence” while using AI in data processing.

The approach of the EU AI Act is to define AI, classify AI algorithms on the basis of risk, and provide graded regulatory control ranging from no control to an outright ban.

The Act defines AI as follows:

A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments

The main distinguishing feature of AI is this: all software consists of coded instructions which get executed automatically in sequence, but AI includes a special instruction to modify its own code under certain conditions so that it becomes self-correcting. This aspect has been captured in the definition.
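A minimal sketch of this self-correcting behaviour in Python may make the distinction concrete (purely illustrative; the class, learning rule and numbers are our own assumptions, not anything prescribed by the Act):

```python
# A conventional program executes fixed instructions; an "adaptive"
# system also revises its own parameters after deployment.

class AdaptiveClassifier:
    """Toy online learner: infers an output from the input it receives
    and self-corrects its weights when feedback says it was wrong."""

    def __init__(self, n_features: int, learning_rate: float = 0.1):
        self.weights = [0.0] * n_features
        self.learning_rate = learning_rate

    def predict(self, x: list[float]) -> int:
        # "Infers from the input it receives how to generate outputs"
        score = sum(w * xi for w, xi in zip(self.weights, x))
        return 1 if score >= 0 else 0

    def learn(self, x: list[float], label: int) -> None:
        # The self-correcting step: behaviour changes after deployment.
        error = label - self.predict(x)
        if error != 0:
            self.weights = [w + self.learning_rate * error * xi
                            for w, xi in zip(self.weights, x)]

model = AdaptiveClassifier(n_features=2)
print(model.predict([1.0, 0.5]))  # 1: initial behaviour
model.learn([1.0, 0.5], label=0)  # feedback: that output was wrong
print(model.predict([1.0, 0.5]))  # 0: the system has corrected itself
```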

However, the more critical aspect of “drawing inference from inputs and generating outputs” arises when the input is a visual or a sound that the AI can match with its machine learning process, identify with a specific character, and respond to. For example, on hearing a sound, AI may infer “this is the voice of Naavi” and respond. This is “Voice Recognition” and involves referring to an earlier database of voices that the AI can remember or refer to. Similarly, when it sees a visual of a person with a raised hand holding a weapon and moving nearer, it may sense an “Attack”, again based on its earlier machine learning process.
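The matching step can be sketched as a nearest-match lookup against stored references (a sketch only; the names, vectors and threshold are invented for illustration):

```python
import math

# Toy "voice recognition": compare an input feature vector against
# references learned earlier. All values here are invented.
REFERENCES = {
    "naavi":   [0.9, 0.1, 0.4],
    "someone": [0.1, 0.8, 0.3],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recognise(sample: list[float], threshold: float = 0.9) -> str:
    # The response is a re-play of earlier inputs: with no match in the
    # reference database, the system cannot "recognise" anything.
    name, score = max(((name, cosine_similarity(sample, ref))
                       for name, ref in REFERENCES.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else "no match"

print(recognise([0.88, 0.12, 0.41]))  # -> "naavi"
```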

At the end of the day, even these responses are a re-play of earlier inputs, and hence the hand of the developer can be identified in the response. In real life, the action of a minor is ascribed to the parent as long as the person is a minor; after attaining majority, the responsibility shifts to the erstwhile minor.

Similarly, an AI has to be recognized with reference to its “Maturity” and identified as an “Emancipated AI” or a “Dependent AI”.

This difference is not captured by the EU AI Act.

The EU Act only identifies the type of decisions that an AI generates, tries to identify the “Risks”, and incorporates them in its classification tag. The maturity of the algorithm is not the consideration; only the identified risk is. It is like failing to distinguish between a knife in the hands of a child, which is a risk, and a knife in the hands of an adult, which is not. Whether this is fine at the current stage or could have been improved is a matter of debate.

The five suggested classifications, whose graded treatment is sketched after this list, are

  1. Unacceptable Risk
  2. High Risk
  3. Low Risk
  4. Generative AI
  5. No Risk
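
A minimal sketch of the graded, tier-based control idea (the tier names follow the list above; the obligations mapped to each tier are simplified assumptions for illustration, not the Act's actual text):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LOW = "low risk"
    GENERATIVE = "generative ai"
    NO_RISK = "no risk"

# Graded regulatory control: from outright ban down to no obligation.
# These obligation summaries are simplified assumptions.
CONTROLS: dict[RiskTier, str] = {
    RiskTier.UNACCEPTABLE: "deployment prohibited",
    RiskTier.HIGH: "conformity assessment and human oversight",
    RiskTier.LOW: "transparency duties towards users",
    RiskTier.GENERATIVE: "disclosure that content is AI-generated",
    RiskTier.NO_RISK: "no specific obligation",
}

def required_control(tier: RiskTier) -> str:
    return CONTROLS[tier]

print(required_control(RiskTier.HIGH))
```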

The Unacceptable Risk AIs are banned and include

  • Behavioral manipulation or deceptive techniques to get people to do things they would otherwise not
  • Targeting people due to things like age or disability to change their behavior and/or exploit them
  • Biometric categorization systems, to try to classify people according to highly sensitive traits
  • Personality characteristic assessments leading to social scoring or differential treatment
  • “Real-time” biometric identification for law enforcement outside of a select set of use cases (targeted search for missing or abducted persons, imminent threat to life or safety/terrorism, or prosecution of a specific crime)
  • Predictive policing (predicting that people are going to commit crime in the future)
  • Broad facial recognition/biometric scanning or data scraping
  • Emotion inferring systems in education or work without a medical or safety purpose

This categorization seriously affects the use of AI in policing. This is like banning the knife whether it is used by a child or an adult.

On the other hand, a “Purpose Based” classification, under which, say, predictive policing is permitted under certain controlled conditions but not otherwise, could have been an approach worth considering. We know that the EU does not trust governments, and hence it was natural for it to take this stand. India cannot take such a stand.

This type of approach says “Privacy is the birthright of criminals” and “Security is not the right of honest citizens”. It is my view that this approach should be unacceptable in India.

However, knowing the behaviour of our Courts, we can predict that if a law upholding the use of AI for security is introduced in India, it will be challenged in Court.

The EU Act concedes that the use of real-time biometric identification for law enforcement may be permitted in certain instances, such as a targeted search for missing or abducted persons or cases of crime and terrorism. Fortunately, the current DPDPA 2023 does recognize “instrumentalities of the State” that may be exempted from Data Fiduciary responsibilities in certain circumstances.

Behavioural manipulation and profiling of people on the basis of biometric categorization are banned under the EU Act.

The second category, namely High Risk AIs, includes AI in medical devices, vehicles, policing, and emotion recognition systems.

It is noted that emotion inferring (in education or work) is “banned” under the Act, yet emotion recognition systems are classified as high risk and not unacceptable risk. This raises a doubt whether humanoid robots under development, which include emotional expression capture and response, would be among the non-permissible uses. Similarly, AI in policing is in the high risk category, but “broad facial recognition” and “predictive policing involving profiling of people as to whether they are likely to commit crimes in future” are in the banned list.

This overlapping of “Unacceptable” and “High” risks could lead to confusion as we go on. The overlapping suggests that we should base the classification more on the purpose of use than on the type of AI. More debate is required to understand the compliance obligations arising out of the classification of AI.

The use of AI in deepfake situations is considered “No Risk” and is another area on which India needs to take a different stand.

The summary of observations is that:

1. “Banning” certain AI systems may disrupt innovation.

2. The risk classification is unclear and overlapping.

3. The maturity of the machine learning process is not considered for classification.

4. The classification mixes up the purpose of use and the nature of the algorithm, which needs clarity.

There is no doubt that legislation of this type is complex and credit is due for attempting it. India should consider improving upon it.



DGPSI and Data Valuation

DGPSI, or the Data Governance and Protection Standard of India, has been adopted by FDPPI (Foundation of Data Protection Professionals in India) as a framework for implementing DPDPA 2023.

In order to ensure that companies do not neglect the importance of recognizing the value of data, DGPSI marks the need for Data Valuation as a model implementation specification under the framework.

Model Implementation Specification 9 (MIS-9) of the DGPSI (Full) framework states

“Organization shall establish an appropriate policy to recognize the financial value of data and assign a notional financial value to each data set and bring appropriate visibility to the value of personal data assets managed by the organization to the relevant stakeholders”

Similarly, Model Implementation Specification 13 (MIS-13) states

“Organization shall establish a Policy for Data Monetization in a manner compliant with law.”

These two specifications ensure that a DGPSI-based implementation will draw the attention of the management to the need for data valuation, though an organization may decide not to implement the recommendation and exercise its option of risk absorption by not complying with this specification.

Data valuation in the personal data scenario is interesting because data protection laws affect the data value.

Accordingly, if personal data has no consent behind it, or the consent is restricted to a given purpose, the value gets adjusted. Data for which consent has been withdrawn or whose purpose has expired should be depreciated. The accuracy of the data also influences its value.

These aspects make data valuation in a personal data context a little more complicated than in a non-personal data scenario. More discussion is required in this regard to arrive at a consensus.

The DVSI model recommends a two-stage valuation of personal data. In the first stage it requires computation of the intrinsic value based on normal principles such as cost of acquisition, market value, etc., and in the second stage it applies a weightage based on a value multiplier index, set out as a matrix, which considers the quality of the data including the legal implications.
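A minimal sketch of such a two-stage computation (the intrinsic-value rule and multiplier figures below are invented placeholders, not the DVSI matrix):

```python
# Stage 1: intrinsic value from normal principles (cost of acquisition,
# market value). Stage 2: a multiplier reflecting consent status and
# data quality. All factor values are invented placeholders.

CONSENT_MULTIPLIER = {
    "valid": 1.0,        # consent covers the intended purpose
    "restricted": 0.6,   # consent limited to a narrower purpose
    "withdrawn": 0.0,    # consent withdrawn / purpose expired: depreciate fully
}

def value_dataset(acquisition_cost: float,
                  market_value: float,
                  consent_status: str,
                  accuracy: float) -> float:
    """Two-stage valuation: intrinsic value, then a legal/quality weightage."""
    intrinsic = max(acquisition_cost, market_value)          # stage 1
    multiplier = CONSENT_MULTIPLIER[consent_status] * accuracy
    return intrinsic * multiplier                            # stage 2

# A dataset acquired for 100, marketable at 150, held under restricted
# consent with 90% accuracy:
print(value_dataset(100.0, 150.0, "restricted", accuracy=0.9))  # 81.0
```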

The DVSI model is a suggestion which requires further discussion in professional circles.


Insights on Privacy in Banks

Naavi/FDPPI had recently announced that we would provide a free assessment of DPDPA 2023 compliance of websites and provide an assurance tag, “Web-DTS”. However, when we went through some of the requests, it was found that none of the websites met the minimum criteria for Web-DTS certification. It was a disappointment that simple compliance requirements, which should already have been in place, remained unattended.

In this context, it was interesting to find, from a report by a company engaged in the development of compliance software, that in a survey of 10 websites of top Banks, the simplest of compliances, namely “Cookie Management”, was found wanting. A glimpse of the findings on the cookies is indicated below.

If the best-equipped organizations like Banks cannot complete the simplest of compliance requirements, such as cookie management on a website, it will be an uphill task to ensure that they become compliant with DPDPA 2023 before the year end.
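For readers who want to run a similar spot check, here is a rudimentary sketch of a first-request cookie audit (the URL is a placeholder; a real survey would also render JavaScript and classify cookies, which this does not):

```python
import urllib.request

def cookies_set_before_consent(url: str) -> list[str]:
    """Fetch a page with no prior consent and list the cookies the
    server sets on that very first response. Non-essential cookies
    appearing here suggest consent-first cookie management is absent."""
    request = urllib.request.Request(url, headers={"User-Agent": "audit/0.1"})
    with urllib.request.urlopen(request) as response:
        # One Set-Cookie header per cookie dropped immediately
        return response.headers.get_all("Set-Cookie") or []

for cookie in cookies_set_before_consent("https://example.com/"):
    print(cookie.split(";", 1)[0])  # show just name=value
```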

Currently, FDPPI is offering a DPDPA 2023 assessment service through the DGPSI framework and has suggested Web-DTS as the first step for compliance of the website.

For its corporate members, FDPPI is providing some services which could include a “Consent Record Management” service. The first milestone for this is Web-DTS and cookie management. In this context, the report on the current status of cookie management in Banks is revealing.

Naavi
