The EU Act on Artificial Intelligence

After a long deliberation, the EU Parliament has adopted the EU AI Act, setting in motion a GDPR-like moment in which similar laws may be considered by other countries. India is committed to revising ITA 2000 and replacing it with a new Act, which may happen in 2024-25 after the next elections; the new Act should include special provisions for regulating AI.

Presently, Indian law addresses AI through ITA 2000 and DPDPA 2023. ITA 2000 assigns accountability for AI to the AI developers, who may transfer it to the licensees of the algorithms developed (Section 11 of ITA 2000). Where an AI model uses personal data for its learning, DPDPA 2023 may apply and treat the algorithm user as a “Data Fiduciary” responsible for consent and accuracy of processing.

An advisory issued recently by MeitY has suggested that platforms which permit the hosting of AI derivatives (e.g., videos) need to take the permission of MeitY.

DGPSI, a framework for implementation of DPDPA 2023, suggests that the AI algorithm vendor be considered a “Data Processor”/“Joint Data Fiduciary” and that a DPIA be conducted before its adoption.

In the light of the above, we can quickly examine the approach of the EU AI Act and draw some thoughts from it for implementing “Due Diligence” while using AI in data processing.

The approach of the EU AI Act is to define AI, classify AI algorithms on the basis of risk, and provide graded regulatory control ranging from no control to an outright ban.

The Act defines AI as follows:

A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments

The main distinguishing feature of AI is this: ordinary software consists of coded instructions that execute automatically in a fixed sequence, whereas AI includes the ability to modify its own behaviour under certain conditions, so that it becomes self-correcting. This adaptiveness has been captured in the definition.

However, the more critical aspect of “drawing inference from inputs and generating outputs” arises when the input is a visual or a sound that the AI can match through its machine-learning process, identify with a specific person, and respond to. For example, on hearing a sound, the AI may infer “this is the voice of Naavi” and respond. This is “voice recognition”, and it involves referring to an earlier database of voices that the AI can remember or consult. Similarly, when it sees a person with a raised hand holding a weapon and moving nearer, it may sense an “attack”, again based on its earlier machine-learning process.

At the end of the day, even these responses are a replay of an earlier input, and hence the hand of the developer can be identified in the response. In real life, an action of a minor is ascribed to the parent as long as the person is a minor; after attaining majority, the responsibility shifts to the erstwhile minor.

Similarly, an AI has to be assessed with reference to its “maturity” and identified as either an “Emancipated AI” or a “Dependent AI”.

This difference is not captured by EU-AI Act.

The EU Act only identifies the type of decisions that an AI generates, tries to identify the “risks”, and incorporates them into a classification tag. This is like treating a knife in the hands of a child as a risk but a knife in the hands of an adult as no risk: the maturity of the algorithm is not the consideration; the identified risk is. Whether this is fine at the current stage or could have been improved is a matter of debate.

The five suggested classifications are

  1. Unacceptable Risk
  2. High Risk
  3. Low Risk
  4. Generative AI
  5. No Risk
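The graded structure above can be sketched as a simple lookup, purely for illustration. The tier names follow the post's list; the example use cases and regulatory consequences are my own assumptions drawn from the discussion below, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers as listed in this post, each mapped to an assumed consequence."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity obligations"
    LOW = "light transparency duties"
    GENERATIVE = "disclosure obligations for generated content"
    NONE = "no regulatory control"

# Hypothetical mapping of example use cases to tiers, based on the
# discussion in this post rather than the text of the Act itself.
USE_CASE_TIERS = {
    "predictive policing": RiskTier.UNACCEPTABLE,
    "social scoring": RiskTier.UNACCEPTABLE,
    "medical device AI": RiskTier.HIGH,
    "emotion recognition system": RiskTier.HIGH,
    "spam filter": RiskTier.NONE,
}

def regulatory_consequence(use_case: str) -> str:
    """Look up a use case's tier; unknown uses default to Low risk here."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LOW)
    return f"{use_case}: {tier.name} -> {tier.value}"
```

The point of the sketch is that the trigger for regulation is the identified use, not any property of the algorithm itself, which is the design choice questioned below.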

Unacceptable-risk AI systems are banned. They include:

  • Behavioral manipulation or deceptive techniques to get people to do things they would otherwise not
  • Targeting people due to things like age or disability to change their behavior and/or exploit them
  • Biometric categorization systems, to try to classify people according to highly sensitive traits
  • Personality characteristic assessments leading to social scoring or differential treatment
  • “Real-time” biometric identification for law enforcement outside of a select set of use cases (targeted search for missing or abducted persons, imminent threat to life or safety/terrorism, or prosecution of a specific crime)
  • Predictive policing (predicting that people are going to commit crime in the future)
  • Broad facial recognition/biometric scanning or data scraping
  • Emotion inferring systems in education or work without a medical or safety purpose

This categorization seriously affects the use of AI in policing. It is like banning the knife whether it is wielded by a child or an adult.

On the other hand, a “purpose-based” classification, under which predictive policing, for instance, would be permitted under certain controlled conditions but not otherwise, could have been an approach worth considering. We know that the EU does not trust governments, and hence it was natural for it to take this stand. India cannot take such a stand.

This type of approach says “Privacy is the birthright of criminals” and “Security is not the right of honest citizens”. It is my view that this approach should be unacceptable in India.

However, knowing the behaviour of our Courts, we can predict that if a law upholding the use of AI for security is introduced in India, it will be challenged in Court.

The EU Act concedes that the use of real-time biometric identification for law enforcement may be permitted in certain instances, such as a targeted search for missing or abducted persons or cases of crime and terrorism. Fortunately, the current DPDPA 2023 does recognize “instrumentalities of State” that may be exempted from Data Fiduciary responsibilities in certain circumstances.

Behavioural manipulation and the profiling of people on the basis of biometric categorization are banned under the EU Act.

The second category, namely high-risk AI, includes AI in medical devices, vehicles, policing, and emotion recognition systems.

It is noted that emotion inferring is “banned” under the Act (in education or work, without a medical or safety purpose), yet emotion recognition systems are classified as high-risk rather than unacceptable-risk. This raises a doubt about whether humanoid robots under development, which include the capture of and response to emotional expression, would fall among the non-permissive uses. Similarly, AI in policing is in the high-risk category, but “broad facial recognition” and “predictive policing involving profiling of people as to whether they are likely to commit crimes in future” are in the banned list.

This overlap between the “unacceptable” and “high” risk categories could lead to confusion as we go on. It suggests that classification should be based more on the purpose of use than on the type of AI. More debate is required to understand the compliance obligations arising out of the classification of AI.

The use of AI in deepfake situations is considered “No Risk” and is another area on which India needs to take a different stand.

The summary of observations is:

1. The “banning” of certain AI systems may disrupt innovation.

2. The risk classification is unclear and overlapping.

3. The maturity of the machine-learning process is not considered in the classification.

4. The classification mixes up the purpose of use with the nature of the algorithm, which needs clarity.

There is no doubt that legislation of this type is complex and credit is due for attempting it. India should consider improving upon it.

Reference Articles:

Clear View

Compliance Checker tool

Posted in Cyber Law | Leave a comment

DGPSI and Data Valuation

DGPSI, or the Data Governance and Protection Standard of India, has been adopted by FDPPI (Foundation of Data Protection Professionals in India) as a framework for implementing DPDPA 2023.

In order to ensure that companies do not neglect the importance of recognizing the value of data, DGPSI marks the need for data valuation as a model implementation specification under the framework.

Model Implementation Specification number 9 (MIS-9) of the DGPSI (Full) framework states:

“Organization shall establish an appropriate  policy to recognize the financial value of data and assign a notional financial value to each data set and bring appropriate visibility to the value of personal data assets managed by the organization to the relevant stakeholders”

Model Implementation Specification number 13 (MIS-13) likewise states:

“Organization shall establish a Policy for Data Monetization in a manner compliant with law.”

These two specifications ensure that a DGPSI-based implementation will draw the management's attention to the need for data valuation, though an organization may decide not to implement the recommendation and exercise its option of risk absorption by not complying with the specification.

Data valuation in the personal data scenario is interesting because data protection laws affect the value of the data.

Accordingly, if personal data has no consent, or the consent is restricted to a given purpose, the value gets adjusted accordingly. Data for which consent has been withdrawn or whose purpose has expired should be depreciated. The accuracy of the data also influences its value.

These aspects make data valuation in the personal data context a little more complicated than in a non-personal data scenario. More discussion is required to arrive at a consensus.

The DVSI model recommends a two-stage valuation of personal data. In the first stage it computes an intrinsic value based on normal principles such as cost of acquisition and market value; in the second stage it applies a weightage based on a value-multiplier index, indicated in the following matrix, which considers the quality of the data including its legal implications.
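As a rough sketch of the two-stage idea: the multiplier values below are invented for illustration and do not come from the DVSI matrix; only the structure (intrinsic value adjusted by consent status and accuracy) reflects the discussion above.

```python
# Stage 2: hypothetical multipliers reflecting legal status of consent.
# The real DVSI value-multiplier index would replace these placeholders.
CONSENT_MULTIPLIERS = {
    "valid_consent": 1.0,
    "restricted_consent": 0.6,   # consent limited to a narrow purpose
    "consent_withdrawn": 0.0,    # withdrawn or purpose-expired -> depreciated
}

def adjusted_value(intrinsic_value: float, consent_status: str,
                   accuracy: float) -> float:
    """Stage-1 intrinsic value (cost of acquisition, market value, etc.)
    adjusted by a consent multiplier and an accuracy score in [0, 1]."""
    return intrinsic_value * CONSENT_MULTIPLIERS[consent_status] * accuracy

# e.g. adjusted_value(100_000, "restricted_consent", 0.9) -> 54000.0
```

The sketch makes the depreciation rule concrete: withdrawn consent zeroes the value, while restricted consent and inaccuracy discount it.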

This is a suggestion that requires further discussion in professional circles.


Insights on Privacy in Banks

Naavi/FDPPI recently announced that we would provide a free assessment of DPDPA 2023 compliance of websites and provide an assurance tag, “Web-DTS”. However, when we went through some of the requests, it was found that none of the websites met the minimum criteria for Web-DTS certification. It was disappointing that simple compliance requirements, which should already have been in place, remain unattended.

In this context, it was interesting to find, from a report by a company engaged in the development of compliance software, that in a survey of the websites of 10 top banks, the simplest of compliance measures, namely “cookie management”, was found wanting. A glimpse of the findings on cookies is indicated below.

If the best-equipped organizations, such as banks, cannot complete the simplest of compliance requirements like cookie management on a website, it will be an uphill task to ensure that they are compliant with DPDPA 2023 before the year end.

Currently, FDPPI is offering a DPDPA 2023 assessment service through the DGPSI framework and suggests Web-DTS as the first step of website compliance.

For its corporate members, FDPPI provides services which could include a “Consent Record Management” service. The first milestone for this is Web-DTS and cookie management. In this context, the report on the current status of cookie management in banks is revealing.

Naavi


Naavi unveils Naavi-63 as a Consent Recording System under DPDPA

During the crash courses on DPDPA 2023 implementation held in Mumbai and Ahmedabad on March 9th and 10th for groups of CIOs, Naavi announced that, as part of the consultancy services offered under the DGPSI framework, a “Consent Recording System” is being introduced to meet compliance requirements.

The system, called Naavi-63, will enable companies to meet the compliance requirements of DPDPA 2023 along with ITA 2000 in obtaining consents, and will be implemented as part of the consultancy.

Naavi


FDPPI- The Privacy Companion of India

Most companies in India have realized that the month March 2024 will soon be over and a new financial year 2024-25 will be before us.

While the politics of the country is keenly following the Lokasabha elections of 2024 and its impact, the corporate world in India is slowly realizing that the financial year 2024-25 will be the year in which DPDPA 2023 will be implemented fully.

Most companies that were holding back because the “Rules” are yet to be notified are realizing that the entire DPDPA 2023 is an extension of ITA 2000, and hence the Courts are already taking cognizance of the presence of the “privacy culture”.

As plans for the year 2024-25 are being drawn up, companies are asking themselves whether they are Significant Data Fiduciaries (SDFs) that need to designate a DPO and appoint an external Data Auditor.

To address this concern, FDPPI is emerging as the “Privacy Companion” which can be the friend, philosopher and guide for companies intending to be compliant with DPDPA 2023.

As the undersigned travels around the country conducting awareness training sessions for professionals of different industries, it is clear that the initial lethargy is giving way to a sense of urgency in setting in motion in-house training programs and DPDPA impact assessment programs, with DPDPA implementation to follow.

With its unique compliance framework “DGPSI”, or the “Data Governance and Protection Standard of India”, FDPPI is all set to be the much sought-after “Privacy Companion” for the industry. The framework encourages companies to set up a Data Governance and Protection Management System (DGPMS) which can achieve DPDPA 2023 compliance by default.

The Privacy Enhancement Tools (PETs) that accompany the DGPSI framework further increase the convenience of compliance.

The DGPSI-PET is the next big thing that the undersigned is working on with a mission to make DPDPA 2023 compliance a smooth corporate activity.

Naavi’s DGPSI-PET system encompasses several companion tools such as

  • Consent Management and Recording Companion
  • Personal Data Discovery Companion
  • Personal Data Classification Companion
  • AI algorithm Privacy companion

Watch out for more on these tools to be unleashed during the year.

Naavi intends to shortly release the Consent Management system in a customized building mode under the name Naavi-63. …More information will follow.

Naavi


New Indian Evidence Act and the new Section 65B Certification

With the notification of the Bharatiya Sakshya Adhiniyam 2023 as the new Indian Evidence Act (NIEA), the time has come to take a fresh look at Section 65B certification and the operations of the Cyber Evidence Archival Center (CEAC), of which Naavi was the pioneer. The Act will be effective from 1st July 2024.

It is well known that the first ever Section 65B certificate to be produced in the Court was the one presented by Naavi at the AMM Court in Egmore Chennai in the case of State of Tamil Nadu Vs Suhas Katti.

This case involved a message posted on a Yahoo group which was alleged to be “obscene” under the then Section 67 of ITA 2000. A copy of the content was produced by Naavi, with a Section 65B certificate, as an observation on the Internet, and based on it the Court convicted the accused. The decision was upheld by the Sessions Court, and the accused served the 9 months of imprisonment that the Sessions Court imposed, though the trial court had imposed 2 years' imprisonment under Section 67 of ITA 2000.

During the trial, questions had been raised about whether a private person could provide the certificate. Subsequently, the same Court further validated the system in another case, where material on a CD seized by the police needed to be taken up for trial.

After this 2004 incident came the 2005 Supreme Court trial of Afzal Guru, in which the Supreme Court accepted oral evidence as a substitute for Section 65B evidence. This was overruled, and a complete ratio was indicated, in the Basheer judgement. Subsequently came the contradictory Shafi Mohammed judgement, which was later overturned in the Arjun Pandit Rao judgement.

Naavi has been the person who has contributed to the development of Cyber Jurisprudence in this regard.

Now, with the passage of the Bharatiya Sakshya Adhiniyam, the old Indian Evidence Act with its Section 65B has been replaced by the new Act, whose Section 63 states as under:

63.(1) Notwithstanding anything contained in this Adhiniyam, any information contained in an electronic record which is printed on paper, stored, recorded or copied in optical or magnetic media or semiconductor memory which is produced by a computer or any communication device or otherwise stored, recorded or copied in any electronic form (hereinafter referred to as the computer output) shall be deemed to be also a document, if the conditions mentioned in this section are satisfied in relation to the information and computer in question and shall be admissible in any proceedings, without further proof or production of the original, as evidence or any contents of the original or of any fact stated therein of which direct evidence would be admissible.
(2) The conditions referred to in sub-section (1) in respect of a computer output shall be the following, namely:—

  • (a) the computer output containing the information was produced by the computer or communication device during the period over which the computer or communication device was used regularly to create, store or process information for the purposes of any activity regularly carried on over that period by the person having lawful control over the use of the computer or communication device;
  • (b) during the said period, information of the kind contained in the electronic record or of the kind from which the information so contained is derived was regularly fed into the computer or communication device in the ordinary course of the said activities;
  • (c) throughout the material part of the said period, the computer or communication device was operating properly or, if not, then in respect of any period in which it was not operating properly or was out of operation during that part of the period, was not such as to affect the electronic record or the accuracy of its contents; and
  • (d) the information contained in the electronic record reproduces or is derived from such information fed into the computer or communication device in the ordinary course of the said activities.


(3) Where over any period, the function of creating, storing or processing information for the purposes of any activity regularly carried on over that period as mentioned in clause (a) of sub-section (2) was regularly performed by means of one or more computers or communication device, whether—

  • (a) in standalone mode; or
  • (b) on a computer system; or
  • (c) on a computer network; or
  • (d) on a computer resource enabling information creation or providing information processing and storage; or
  • (e) through an intermediary,


all the computers or communication devices used for that purpose during that period shall be treated for the purposes of this section as constituting a single computer or communication device; and references in this section to a computer or communication device shall be construed accordingly.


(4) In any proceeding where it is desired to give a statement in evidence by virtue of this section, a certificate doing any of the following things shall be submitted along with the

  • (a) identifying the electronic record containing the statement and describing the manner in which it was produced;
  • (b) giving such particulars of any device involved in the production of that electronic record as may be appropriate for the purpose of showing that the electronic record was produced by a computer or a communication device referred to in clauses (a)to (e) of sub-section (3);
  • (c) dealing with any of the matters to which the conditions mentioned in sub-section (2) relate, and purporting to be signed by a person in charge of the computer or communication device or the management of the relevant activities (whichever is appropriate) and an expert shall be evidence of any matter stated in the certificate; and for the purposes of this sub-section it shall be sufficient for a matter to be stated to the best of the knowledge and belief of the person stating it in the certificate specified in the Schedule.

(5) For the purposes of this section,—

  • (a) information shall be taken to be supplied to a computer or communication device if it is supplied thereto in any appropriate form and whether it is so supplied directly or (with or without human intervention) by means of any appropriate equipment;
  • (b) a computer output shall be taken to have been produced by a computer or communication device whether it was produced by it directly or (with or without human intervention) by means of any appropriate equipment or by other electronic means as referred to in clauses (a) to (e) of sub-section (3).

Additionally, a Schedule has been added with the format of a Certificate to be issued under section 63(4)(c).
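The matters that a Section 63(4) certificate must cover can be sketched as a simple record. The field names below are my own shorthand for the clauses quoted above, not the statutory format, which is the one prescribed in the Schedule.

```python
from dataclasses import dataclass

@dataclass
class Section63Certificate:
    """Hypothetical summary of the matters a Section 63(4) certificate covers."""
    record_identification: str   # 63(4)(a): identify the record and manner of production
    device_particulars: str      # 63(4)(b): particulars of the producing device
    conditions_statement: str    # 63(4)(c): conditions in 63(2) were satisfied
    signatory_in_charge: str     # person in charge of the device/relevant activities
    signatory_expert: str        # the expert whose signature 63(4)(c) now contemplates

    def render(self) -> str:
        """Render the fields as simple labelled lines for review."""
        return "\n".join(f"{name}: {value}"
                         for name, value in vars(self).items())
```

One visible change from the old Section 65B regime, reflected in the last field, is that the certificate now also carries an expert's signature alongside that of the person in charge.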

The narrative on Section 65B has therefore changed to some extent. Watch out for a new E Book on this topic.

The new Certifications that Naavi would be providing under the new section will henceforth  be called “Section 63 BSA Certificate”. 

The Cyber Evidence Archival Center (CEAC) is presently restricting its operations to certificates issued through franchisees. Naavi personally has stopped issuing such certificates, to limit the attendant Court appearances.

However, consequent to the introduction of DPDPA 2023, one of the services of CEAC, namely CEAC-EDB, is being modified into a service for DPDPA 2023 compliance, details of which will be provided separately.

Naavi
