The EU Act on Artificial Intelligence

After a long deliberation, the EU Parliament has adopted the EU AI Act, setting in motion a GDPR-like moment where similar laws may be considered by other countries. India is committed to revising ITA 2000 and replacing it with a new Act, which may happen in 2024-25 after the next elections, and the new Act should include special provisions for regulating AI.

Presently, Indian law addresses AI through ITA 2000 and DPDPA 2023. ITA 2000 assigns accountability for AI to the AI developers, who may transfer it to the licensees of the algorithms developed (Section 11 of ITA 2000). Where the AI model uses personal data for its learning, DPDPA 2023 may apply and treat the algorithm user as a “Data Fiduciary” responsible for consent and accuracy of processing.

An advisory issued recently by MeitY has suggested that platforms which permit hosting of AI derivatives (e.g., videos) need to take permission from MeitY.

DGPSI, which is a framework for implementation of DPDPA 2023, suggests that the AI algorithm vendor be considered a “Data Processor”/“Joint Data Fiduciary” and that a DPIA be conducted before the algorithm is adopted.

In the light of the above, we can quickly examine the approach of the EU AI Act and draw some thoughts from it for implementing “Due Diligence” while using AI in data processing.

The approach of the EU AI Act is to define AI, classify AI algorithms on the basis of risk, and provide graded regulatory control ranging from no control at all to an outright ban.

The Act defines AI as follows:

A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments

The main distinguishing feature of AI is this: all software consists of coded instructions which get executed automatically in sequence, but AI additionally contains instructions to modify its own behaviour under certain conditions, so that it becomes self-correcting. This aspect has been captured in the definition.
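To make the distinction concrete, here is a minimal sketch (the names and the example task are hypothetical, not anything drawn from the Act): ordinary software applies the same coded rule forever, while an adaptive system adjusts its own decision rule from the feedback it receives after deployment.

```python
# Illustrative sketch only: a fixed-rule program versus a toy system that
# adapts its behaviour after deployment. All names are hypothetical.

def fixed_rule_filter(message: str) -> bool:
    """Ordinary software: the decision rule is frozen at the time of coding."""
    return "lottery" in message.lower()


class AdaptiveFilter:
    """Toy adaptive system: word weights change with observed outcomes,
    which is the 'self-correcting' behaviour referred to above."""

    def __init__(self) -> None:
        self.weights: dict[str, float] = {"lottery": 1.0}
        self.threshold = 1.0

    def predict(self, message: str) -> bool:
        score = sum(self.weights.get(word, 0.0) for word in message.lower().split())
        return score >= self.threshold

    def learn(self, message: str, was_spam: bool) -> None:
        # The adaptive step: the decision rule itself is rewritten from feedback.
        delta = 0.5 if was_spam else -0.5
        for word in message.lower().split():
            self.weights[word] = self.weights.get(word, 0.0) + delta
```

After a few calls to learn(), the same input can produce a different output than it did on day one; the fixed-rule version, by contrast, behaves identically forever.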

However, the more critical aspect of “drawing inference from inputs and generating outputs” arises when the input is a visual or a sound that the AI can match against what it has learned, identify with a specific characteristic, and respond to accordingly. For example, on hearing a sound, the AI may infer “this is the voice of Naavi” and respond. This is “Voice Recognition” and involves referring to the earlier database of voices that the AI can remember or refer to. Similarly, when it sees a visual of a person with a raised hand holding a weapon and moving nearer, it may sense an “Attack”, again based on its earlier machine learning.
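A minimal sketch of this kind of inference, assuming a purely hypothetical database of previously learned voice “fingerprints” stored as feature vectors: the recognition step is simply a similarity comparison between the incoming sound and the stored references.

```python
# Hypothetical illustration: identifying a speaker by comparing an incoming
# voice sample (as a feature vector) against previously learned references.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


# Assumed reference database built up during the machine learning phase.
KNOWN_VOICES = {
    "naavi": [0.9, 0.1, 0.3],
    "other_speaker": [0.1, 0.8, 0.4],
}


def identify_speaker(sample: list[float], threshold: float = 0.9) -> str:
    """Return the best-matching known speaker, or 'unrecognised' if none match."""
    best_name, best_score = "unrecognised", threshold
    for name, reference in KNOWN_VOICES.items():
        score = cosine_similarity(sample, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name


print(identify_speaker([0.88, 0.12, 0.28]))  # close to the stored "naavi" vector
```

The output is only as good as the stored references and the learning process that produced them, which is why the question of developer responsibility discussed below matters.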

At the end of the day, even these responses are a replay of an earlier input, and hence the hand of the developer can be identified with the response. In real life, the action of a minor is ascribed to the parent as long as the person is a minor; after attaining majority, the responsibility shifts to the erstwhile minor.

Similarly, an AI has to be recognized with reference to its “Maturity” and identified as either an “Emancipated AI” or a “Dependent AI”.

This difference is not captured by the EU AI Act.

The EU Act only looks at the type of decisions that an AI generates, tries to identify the “Risks”, and incorporates them into its classification tag. This is like failing to recognise that a knife in the hands of a child is a risk while a knife in the hands of an adult is not, since the maturity of the algorithm is not the consideration; only the identified risk is. Whether this is fine at the current stage or could have been improved is a matter of debate.

The five suggested classifications are as follows (an illustrative sketch of the graded controls follows the list):

  1. Unacceptable Risk
  2. High Risk
  3. Low Risk
  4. Generative AI
  5. No Risk
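Purely as an illustration of the “graded control” idea described earlier (the mapping below is my own rough paraphrase, not the text of the Act), the five categories can be thought of as a lookup from classification tier to the kind of obligation attached:

```python
# Rough illustrative mapping of the five categories to graded controls.
# The obligation descriptions are paraphrases for illustration only,
# not the wording of the EU AI Act.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted, but subject to strict assessment and oversight"
    LOW = "permitted with light-touch obligations"
    GENERATIVE = "permitted, subject to transparency and disclosure expectations"
    NO_RISK = "no specific obligations"


def graded_control(tier: RiskTier) -> str:
    """Return the regulatory treatment assumed here for a classification tier."""
    return tier.value


print(graded_control(RiskTier.UNACCEPTABLE))  # -> "banned outright"
```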

The Unacceptable Risk AIs are banned and include:

  • Behavioral manipulation or deceptive techniques to get people to do things they would otherwise not
  • Targeting people due to things like age or disability to change their behavior and/or exploit them
  • Biometric categorization systems, to try to classify people according to highly sensitive traits
  • Personality characteristic assessments leading to social scoring or differential treatment
  • “Real-time” biometric identification for law enforcement outside of a select set of use cases (targeted search for missing or abducted persons, imminent threat to life or safety/terrorism, or prosecution of a specific crime)
  • Predictive policing (predicting that people are going to commit crime in the future)
  • Broad facial recognition/biometric scanning or data scraping
  • Emotion inferring systems in education or work without a medical or safety purpose

This categorization seriously affects the use of AI in policing. This is like banning the knife whether it is used by a child or an adult.

On the other hand, a “Purpose Based” classification, under which a use such as predictive policing is permitted under certain controlled conditions but not otherwise, could have been an approach worth considering. We know that the EU does not trust governments, and hence it was natural for it to take this stand. India cannot take such a stand.

This type of approach says “Privacy is the birthright of criminals” and “Security is not the right of honest citizens”. It is my view that this approach should be unacceptable in India.

However, knowing the behaviour of our Courts, we can predict that if a law upholding the use of AI for security is introduced in India, it will be challenged in Court.

The EU Act concedes that the use of real-time biometric identification for law enforcement may be permitted in certain instances, such as a targeted search for missing or abducted persons or cases of crime and terrorism. Fortunately, the current DPDPA 2023 does recognize “Instrumentalities of State” that may be exempted from Data Fiduciary responsibilities in certain circumstances.

Behavioural manipulation and profiling of people on the basis of biometric categorization are banned under the EU Act.

The second category of AIs, namely High Risk, includes AI in medical devices, vehicles, policing and emotion recognition systems.

It is noted that emotion inferring (in education or work) is “Banned” under the Act, yet emotion recognition systems are classified as high risk and not unacceptable risk. This could raise a doubt about whether humanoid robots under development, which include emotional expression capture and response, would fall among the non-permissive uses. Similarly, AI in policing is in the high risk category, but “broad facial recognition” and “predictive policing involving profiling of people as to whether they are likely to commit crimes in future” are in the banned list.

This overlapping of “Unacceptable” and “High” risks could lead to confusion as we go on. The overlap suggests that the classification should be based more on the purpose of use than on the type of AI. More debate is required to understand the compliance obligations arising out of the classification of AI.

The use of AI in deepfake situations is considered “No Risk” and is another area on which India needs to take a different stand.

The summary of observations is:

1. “Banning” certain AI systems may disrupt innovation.

2. The risk classification is unclear and overlapping.

3. The maturity of the machine learning process is not considered in the classification.

4. The classification mixes up the purpose of use and the nature of the algorithm, which needs clarity.

There is no doubt that legislation of this type is complex and credit is due for attempting it. India should consider improving upon it.

