Applicability and Non-Applicability of EU-AI Act

(continuation from the previous Article)

In order to look at the compliance requirements under the EU-AI Act, we first need to understand what AI is, what the role of our organization is under the Act, and how the AI system is classified.

We have discussed the definition of AI in our previous article.

In this article, which may need some extensions, we shall explore the roles to which the EU-AI Act is applicable, as set out in Article 2.

Under Article 2.1, the Act is applicable to

(a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the Union;

(ca) importers and distributors of AI systems;

(cb) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

(cc) authorised representatives of providers, which are not established in the Union;

(cd) affected persons that are located in the Union.

It is observed from the above scope that organizations such as AI developers in India whose products are accessible in the EU, whether because the developer exports the products to importers in the EU, runs a website accessible from the EU, or directly operates in the EU and offers the service, may come under the Act.
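For readers from the software industry, the role-based scope test above can be visualised as a simple decision rule. The following Python sketch is purely illustrative: the field names and the `act_may_apply` helper are our own assumptions, not part of the Act, and actual applicability needs a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class Organisation:
    """Hypothetical profile of an organisation; field names are illustrative."""
    role: str                  # "provider", "deployer", "importer", "distributor"
    located_in_eu: bool        # established or located within the Union
    places_on_eu_market: bool  # exports to EU importers, EU-accessible offering, etc.
    output_used_in_eu: bool    # output produced by the system is used in the Union

def act_may_apply(org: Organisation) -> bool:
    """Rough first-pass screen of Article 2.1; not a legal determination."""
    if org.role == "provider" and (org.located_in_eu or org.places_on_eu_market):
        return True  # Article 2.1(a): providers placing systems on the EU market
    if org.role == "deployer" and org.located_in_eu:
        return True  # Article 2.1(b): deployers located within the Union
    if org.role in ("provider", "deployer") and org.output_used_in_eu:
        return True  # Article 2.1(c): third-country actors, output used in the EU
    if org.role in ("importer", "distributor"):
        return True  # Article 2.1(ca): importers and distributors
    return False

# An Indian developer whose service is offered in the EU through a website:
indian_dev = Organisation(role="provider", located_in_eu=False,
                          places_on_eu_market=True, output_used_in_eu=True)
```

On this sketch, `act_may_apply(indian_dev)` returns `True`, which is the point made above about Indian developers whose products reach the EU market.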

Article 2.2 states that the Act is applicable in the following circumstances:

For AI systems classified as high-risk AI systems in accordance with Articles 6(1) and 6(2), related to products covered by Union harmonisation legislation listed in Annex II, Section B, only Article 84 of this Regulation shall apply.

Article 53 shall apply only insofar as the requirements for high-risk AI systems under this Regulation have been integrated under that Union harmonisation legislation.

However, the subsection Article 2.3 states:

“This Regulation shall not apply to areas outside the scope of EU law and in any event shall not affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States to carry out the tasks in relation to those competences.”

“This Regulation shall not apply to AI systems if and insofar placed on the market, put into service, or used with or without modification of such systems exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.”

“This Regulation shall not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.”

The subsection Article 2.4 states:

“This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, under the condition that this third country or international organisations provide adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.”

The subsection Article 2.5 states:

“This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section 4 of Directive 2000/31/EC of the European Parliament and of the Council[*] [as to be replaced by the corresponding provisions of the Digital Services Act].”

[*] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’) (OJ L 178, 17.7.2000, p. 1)

The subsection Article 2.5a states:

This Regulation shall not apply to

AI systems and models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.

However, Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation.

This Regulation shall not affect Regulations (EU) 2016/679 and (EU) 2018/1725 and Directives 2002/58/EC and (EU) 2016/680, without prejudice to arrangements provided for in Article 10(5) and Article 54 of this Regulation.

The subsection Article 2.5b states:

5b. This Regulation shall not apply to any research, testing and development activity regarding AI systems or models prior to being placed on the market or put into service; those activities shall be conducted respecting applicable Union law. The testing in real world conditions shall not be covered by this exemption.

This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.

The subsection Article 2.5c states:

5c. This Regulation shall not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.

The subsection Article 2.5e (5d missing) states:

5e. This Regulation shall not preclude Member States or the Union from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or to encourage or allow the application of collective agreements which are more favourable to workers.

The subsection Article 2.5g (5f missing) states:

5g. The obligations laid down in this Regulation shall not apply to AI systems released under free and open source licences unless they are placed on the market or put into service as high-risk AI systems or an AI system that falls under Title II and IV.

As one can observe, each of the above sub-clauses needs to be explored independently, which we shall do in subsequent articles.

If we look at how DGPSI tries to address similar concerns, we bank upon the legal provisions in India under ITA 2000 (Section 11), which provides that the actions of an automated system are attributable to the “Controller” of the system.

Hence, under DGPSI, if we identify any automated system which may be classified as AI or Generative AI, we try to identify whether the Data Fiduciary is in control of the means of processing. If he does not know what the AI code is doing, the AI developer or deployer is considered a “Joint Data Fiduciary”, and all responsibilities for compliance with DPDPA 2023 lie with him, along with the liabilities, for which the contract should provide an indemnity clause. Where the deployer is ready to disclose the code, or to give an assurance that the means of processing is exactly what the data fiduciary prescribes and is auditable by him, the deployer may be considered a “Data Processor”.
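The role test described above reduces to a two-question decision rule. The sketch below is a simplification under our own naming: the parameters are hypothetical, and the actual determination would depend on the contract and the audit arrangement.

```python
def classify_ai_vendor(fiduciary_controls_processing: bool,
                       code_disclosed_or_auditable: bool) -> str:
    """Sketch of the DGPSI role test (illustrative, not normative).

    If the data fiduciary is in control of the means of processing, or the
    vendor discloses the code / makes it auditable, the vendor may be treated
    as a Data Processor; otherwise it is treated as a Joint Data Fiduciary.
    """
    if fiduciary_controls_processing or code_disclosed_or_auditable:
        return "Data Processor"
    return "Joint Data Fiduciary"

# A black-box AI vendor whose code the fiduciary cannot inspect:
role = classify_ai_vendor(fiduciary_controls_processing=False,
                          code_disclosed_or_auditable=False)
```

Here `role` comes out as `"Joint Data Fiduciary"`, which is the case where the indemnity clause discussed above becomes important.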

DGPSI recommends end-to-end business compliance, and hence the issue of role definition for AI deployment, from the perspective of the user data fiduciary, should be settled at the time of approving a vendor or buying a software system.

Also, unlike the EU-AI Act, which applies to both personal and non-personal data, under DGPSI we look at systems which process personal data, and in the case of the developer or vendor we expect them to be able to certify that their AI is a “DPDPA Compliant AI”. The AI-DTS system is being developed to evaluate the compliance maturity of an AI product.

(P.S.: Readers would appreciate that these are pioneering concepts under development and continual improvement. The professional community is welcome to join hands with Naavi in developing these concepts further.)

Naavi

Posted in Cyber Law | Leave a comment

Defining of AI: DGPSI approach

(Continued from previous article: Impact of EU AI Act on India)

EU AI Act expects that “Providers”, “Deployers” and other “Trade intermediaries” shall be bound by the law (Effective from 2026).

Compliance with the EU-AI Act starts with flagging a “Product” or “Service” as an “AI Product/Service”. Hence the definition of what constitutes an “AI System” becomes key to compliance.

According to Article 3(1)

‘AI system’ is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

Though compliance is based on “Risk Assessment”, applicability at the foundation stage is guided by this definition. (Other aspects of applicability are who the user is and where the system is being used.)

ISO 42001 is made applicable to an “AI management system” (AIMS) but does not clearly define how an organization can identify which part of its business is AI and which is not. It adopts ISO/IEC 22989:2022 for the definition, which vaguely defines Artificial Intelligence as

“an engineered system set of methods or automated entities that together build, optimize and apply a physical, mathematical, or otherwise logical representation of a system, entity, phenomenon, process or data so that the system can, for a given set of predefined tasks, compute predictions, recommendations, or decisions”.

It is important for us to realize that, at the umbrella level, all software systems are “Intelligent”. But the intelligence of a software system is limited to the instructions already embedded as the “Code”. The software is therefore a faithful servant that remembers the instructions and executes them based on the context.

An AI system is a special kind of software which has two key characteristics, namely “Autonomy” and “Adaptiveness”, as stated in the EU-AI definition. The ISO 42001 definition only speaks of “Predictions” and “Decisions” as terms that can be attributed to the nature of AI as distinct from other software.

In contrast, the definitions arising from organizations like UNESCO, which focus on the impact of AI on people, state that

“AI systems are systems which have the capacity to process data and information in a way that resembles intelligent behaviour, and typically includes aspects of reasoning, learning, perception, prediction, planning or control.”

Taking these three definitions into consideration, DGPSI adopts the following definition of AI.

Definition of AI under DGPSI

AI is a class of automated data processing systems where human intervention, both in producing the decision output and in applying that output to a business decision, is below an acceptable threshold.

In order to define the threshold, three classes of AI are recognized as part of the definition.

Class 1:

Any software that has a code-correcting ability, generating an output without the intervention of a human developer, is considered an AI system.

Class 2:

Any system that automatically implements a decision affecting a human is considered an AI system.

Class 3:

Any system that reacts to human emotions or is capable of creative outputs, including generative AI, is considered an AI system.

This definition includes the “Autonomy” and “Adaptiveness” of the EU-AI Act, the “Predictiveness” and “Automated decision making” of ISO 42001, and the “Human-like behaviour” of the UNESCO definition. The sub-classifications also take into account “Risk Perceptions” such as “Unsupervised Machine Learning”, “Bias in automated decision making” and “Hallucination and rogue behaviour”.

The first task of compliance is therefore to label a software system as an “AI System” using this definition.
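As a minimal sketch, the three-class definition above can be expressed as a labelling function. The parameter names are our own shorthand for the criteria stated in the classes, and a real assessment would of course involve human judgement rather than three booleans.

```python
def dgpsi_ai_class(self_correcting_code: bool,
                   auto_implements_decisions_on_humans: bool,
                   emotion_aware_or_generative: bool) -> int:
    """Label a system with a DGPSI AI class; returns 0 for ordinary software.

    The highest applicable class is returned, since Class 3 systems will
    typically also satisfy the lower-class criteria.
    """
    if emotion_aware_or_generative:
        return 3  # Class 3: reacts to emotions / creative outputs / generative AI
    if auto_implements_decisions_on_humans:
        return 2  # Class 2: automatically implements decisions affecting humans
    if self_correcting_code:
        return 1  # Class 1: corrects its own code without a human developer
    return 0      # not AI: intelligence limited to the embedded code

# A chatbot that generates creative text:
chatbot_class = dgpsi_ai_class(True, False, True)
```

Under this sketch the chatbot lands in Class 3, while a conventional rule-based payroll program would return 0 and stay outside the AI label.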

PS: 

Under DGPSI (Digital Governance and Protection Standard of India) we define the DGPMS (Digital Governance and Protection Management System) and apply “Compliance By Design” principles. 

The DGPMS is an integration of PIMS, ISMS and AIMS, each as applied to personal data processing.

The AIMS under DGPMS is a sub-system which is treated either as a Joint Data Fiduciary or as a Data Processor, and compliance requirements are designed accordingly.

Comments are welcome.

Naavi


Impact of EU AI Act on India

“Extra-territorial Jurisdiction” was one of the aspects of GDPR which created a scare in the Indian industry in 2018. This was also the reason for the birth of FDPPI, the organization which has today grown to be a premier Privacy and Data Protection organization in India.

Today the EU-AI Act is threatening to be another, more potent, EU legislation that could have an adverse impact on the Indian IT industry.

As regards GDPR, the Indian industry has so far not been much affected, since it works under the radar as “Processors” to a “Controller in EU”. The SCCs, which were tightened after the Schrems judgement, got rationalized after the adoption of the EU-US Privacy Framework.

However, the EU-AI Act appears to be re-igniting the controversy of an attempt by the EU to create global hegemony through the Act and its penalty clauses.

The EU Commission has established “The European AI Office”, which will be the regulator for the Act. The first point we need to note about the EU-AI Act is the penalty clause (Article 70).

According to Article 70,

a) non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines up to €35 million or 7% of the total worldwide annual turnover, whichever is higher;

b) non-compliance of an AI system with any of the provisions other than those in Article 5, as laid down in Articles 16, 25, 26, 27, 29 (paragraphs 1 to 6a), 33, 34(1), 34(3), 34(4), 34a and 52, may result in administrative fines up to €15 million or up to 3% of total worldwide annual turnover;

c) supply of incorrect, incomplete or misleading information in reply to a request may result in an administrative fine up to €7.5 million or 1% of total worldwide annual turnover;

d) in the case of SMEs, the administrative fine will be the lower of the above amounts.
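The penalty tiers summarised above lend themselves to a simple calculation. The sketch below assumes the higher-of rule applies to all three tiers (the summary above states it explicitly only for the Article 5 tier) and the lower-of rule for SMEs; the tier names are our own labels, not terms from the Act.

```python
def article70_fine_cap(violation: str, worldwide_turnover_eur: float,
                       is_sme: bool = False) -> float:
    """Maximum administrative fine under Article 70 as summarised above.

    For most entities the cap is the HIGHER of the fixed amount and the
    turnover percentage; for SMEs it is the LOWER of the two.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),  # Article 5 violations
        "other_obligation":    (15_000_000, 0.03),  # other listed provisions
        "misleading_info":     (7_500_000,  0.01),  # incorrect/misleading replies
    }
    fixed, pct = tiers[violation]
    turnover_based = pct * worldwide_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A large provider with €1 billion worldwide turnover violating Article 5:
cap = article70_fine_cap("prohibited_practice", 1_000_000_000)
```

For this example the 7% turnover figure (€70 million) exceeds the €35 million fixed amount, so the cap is €70 million; the same firm as an SME would face the €35 million figure instead.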

Scope of the Act

Article 2 defines the scope of the Act. The essence of the scope is briefly indicated below. (An in-depth discussion of the applicability will follow.)

According to Article 2.1, the regulation applies to:

a) Providers placing a product or service in the Union, irrespective of whether they are established or located within the Union.

b) Deployers of AI systems that have their place of establishment or who are located within the Union

c) Providers and deployers of AI systems that have their place of establishment or location in a third country, where the output produced is used in the Union.

d) Importers and distributors of AI systems

e) Product manufacturers placing their product under their own name and trademark

f) Authorised representatives of providers which are not established in the Union

g) Affected persons that are located in the Union

This regulation shall not apply to areas outside the scope of EU law, nor to matters concerning national security, scientific research, etc.

This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section 4 of Directive 2000/31/EC of the European Parliament and of the Council [as to be replaced by the corresponding provisions of the Digital Services Act].

This Regulation shall not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.

The obligations laid down in this Regulation shall not apply to AI systems released under free and open source licences unless they are placed on the market or put into service as high-risk AI systems

Though the Act is considered a “Risk-Based” system and technology-neutral, it defines the AI system as follows:

‘AI system’ is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

A general reading of the Act implies that Indian software developers placing their products and services in the European Union, either directly or through distributors, need to be aware of the need to cover the risks.

(More discussions will follow…continued)

Naavi

Reference

EU Commission-AI


MOU with iLET Solutions

An MOU was signed between Ujvala Consultants Pvt Ltd, as training partner of FDPPI, and iLET Solutions Private Limited, an e-Learning platform, to provide Learning Management solutions for the different online training programs conducted by Cyber Law College. Mr Ashok Kini, partner, Klickstart Solutions, and Director, FDPPI (Chapter Activity Coordination Committee), and Mr Suresh Balepur, President, Bangalore Chapter of FDPPI, were present on the occasion.

iLET Solutions was founded in 2018 and offers a wide range of blended learning courses for talent development and enrichment across all age groups. 

Mr Mayank Jaiswal, the Co-Founder and Director, executed the MOU, which enables FDPPI and Cyber Law College to host the Certification programs on the platform under the URL “Learnwyse.com”.

iLET will also host “Jnaana Bhandar”, the video repository of FDPPI events, which is part of the continuing education of the members. FDPPI will launch “Jnaana Bhandar” as part of its “Content Membership” program, where professionals can subscribe to the different videos produced during FDPPI knowledge sessions and events, which should be useful reference material.

Naavi


Hacker’s Attack Indian National Digital Assets

Several Indian Government entities and energy companies were targeted by unknown threat actors, leading to the suspected theft of sensitive data, both personal and non-personal.

A report in thehackernews.com suggests that the information-stealing code was delivered through a phishing email disguised as an “Invitation Letter from the Indian Air Force”. Data was exfiltrated through Slack channels.

The incident should be registered as a “Cyber Terror” activity, and the potential link to the current political scenario in the country should be investigated.

Naavi


Advertising Industry needs to wake up to the demands of DPDPA 2023..1

Naavi, as part of his career development, spent around 11 years in the advertising industry and closely participated in the activities of a full-service advertising agency which creates brands, builds brands, understands consumer behaviour through research, reaches out to consumers, and creates effective communication to pass on a message to the masses. Naavi’s involvement in advertising came during the period when the Internet made its entry, when the industry was transforming from newspapers to the TV medium, with advertising on websites just appearing on the horizon. At that time Naavi had also conceived and pursued a patent, “Adview Certification”, which involved implanting an intelligent beacon on a website to monitor the behaviour of visitors and develop a reliable visitor metric, like the TRP in the TV industry or the ABC (Audit Bureau of Circulation).

With this background, if we now look at the developments worldwide on “Privacy”, it appears that the digital advertising industry is one sector which faces an existential threat on account of privacy laws. While the Fintech and Health sectors also have many hurdles to cross, those are to some extent manageable. But the digital advertising industry, which is at the root of all marketing activities and has to design communication appropriate to the target audience, has a real uphill task, to the extent that many may feel there is no way the industry can be fully compliant, and hence the winner is the one who is good at deception.

The data analytics industry has two parts to its activities, namely the analysis of anonymized data and the analysis of identified personal data. The data science industry working with anonymized data may not be affected by privacy laws if we accept that “anonymization of previously identifiable personal data” is similar to “deletion” and does not require any explicit consent of the data principal. However, the analysis of identifiable personal data is closely associated with “targeted advertising” and faces the same problem as the advertising industry. In fact, the data analytics of identifiable personal data and the digital advertising industry work in close unison, and hence their problems are similar.

To understand the issue, let us start with the simplest of tasks, namely “sending e-mails without prior consent” offering products or services. At present we call these “unsolicited emails” or “spam”. “Causing annoyance” with repeated unsolicited emails is a punishable offence under some laws. (This also applies to unsolicited phone calls.)

Does this mean that the only way an organization can reach out to its prospective customers is through “Search Engines” and “voluntary walk-in enquiries”? Unsolicited mobile calls are a little more annoying than unsolicited emails, since mobile calls cause a greater disturbance. However, emails provide an opportunity to respond at leisure and hence are less demanding on the critical time of the receiver.

The privacy law makers and the advertising industry have to sit together, sort out this issue, and decide whether a polite “e-mail requesting permission to send the next detailed email about the service”, say once a year, should be considered a permitted one-time activity.

The other points of discussion, which we may take up in continuation, are:

1. Profiling a customer for the purpose of market segmentation and targeted advertising

2. “Collecting personal information through cookies set by the advertising agencies/adtech companies on the websites of companies” and consent mechanism for the same

3. “Regulation of information collected by an ad agency/adtech company through cookies from one advertising client to be used for profiling and made available to other clients”.

Internationally, there is a discussion on the “Diligence Requirements for the Adtech Industry” for demonstrating lawful consent for collecting and selling personal data. (Refer to the article on iapp.org.)

That article flags the efforts of the Interactive Advertising Bureau and the SafeGuard Privacy tool called the IAB Digital Platform. The platform will contain a set of standardized privacy diligence questions specially designed for participants in the digital advertising industry. This is a good and much-needed industry initiative.

Some of the requirements mentioned here were included in the WebDTS concept which FDPPI promoted, but which encountered a frustratingly large amount of non-compliance. Perhaps in India the advertising industry regulators need to start looking at “compliance with DPDPA 2023” as a requirement to be considered. At present, the advertising industry, and more particularly the Ad Tech companies, would appear to be completely unconnected with DPDPA 2023. The end users may escape responsibility by stating that the “Ad Service Provider is a Joint Data Fiduciary” and is responsible for compliance with DPDPA. With many of them operating AI platforms hosted on the websites of their clients, and with the information collected being that of the customer of a customer, there is very little possibility of “Consent” being obtained.

While compliance activists like us keep pointing out these issues, the compliance subjects continue to feel that compliance is “Impractical”. The advertising industry needs to sit together and find a proper solution to this problem at the earliest.

(Comments welcome)

Naavi
