Applicability and Non-Applicability of the EU-AI Act

(Continuation from the previous article)

In order to look at the compliance requirements under the EU-AI Act, we need to first understand what AI is, what the role of our organization is under the Act, and how the AI system is classified.

We have discussed the definition of AI in our previous article.

In this article, which may need some later extensions, we shall explore the roles to which the EU-AI Act is applicable, as set out in Article 2.

Under Article 2.1, the Act is applicable to:

(a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or who are located within the Union or in a third country;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the Union;

(ca) importers and distributors of AI systems;

(cb) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

(cc) authorised representatives of providers, which are not established in the Union;

(cd) affected persons that are located in the Union.

It is observed from the above scope that organizations developing AI in India may come under the Act if their products are accessible in the EU, whether because the developer exports the products to importers in the EU, runs a website accessible from the EU, or directly operates in the EU and offers the service there.
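To make this screening concrete, the following is a minimal sketch in Python of how an organization might check itself against the Article 2.1 triggers. The role labels and flags (such as places_on_eu_market and output_used_in_eu) are hypothetical illustrations of the questions involved, not part of any official checklist, and the sketch is no substitute for a proper legal assessment.

    from dataclasses import dataclass

    @dataclass
    class Organization:
        # Hypothetical self-assessment flags; each maps to an Article 2.1 trigger.
        role: str                  # "provider", "deployer", "importer", "distributor"
        established_in_eu: bool    # place of establishment within the Union
        places_on_eu_market: bool  # places on the market / puts into service in the Union
        output_used_in_eu: bool    # output produced by the system is used in the Union

    def article_2_1_applies(org: Organization) -> bool:
        """Rough screening against the Article 2.1 list quoted above."""
        if org.role == "provider" and org.places_on_eu_market:
            return True   # 2.1(a): applies wherever the provider is located
        if org.role == "deployer" and org.established_in_eu:
            return True   # 2.1(b): deployers established or located in the Union
        if org.role in ("provider", "deployer") and org.output_used_in_eu:
            return True   # 2.1(c): third-country actors whose output is used in the EU
        if org.role in ("importer", "distributor"):
            return True   # 2.1(ca): importers and distributors of AI systems
        return False

    # Example: an Indian developer whose product is exported to EU importers
    # or whose website output is used from within the EU.
    indian_developer = Organization(role="provider", established_in_eu=False,
                                    places_on_eu_market=True, output_used_in_eu=True)
    print(article_2_1_applies(indian_developer))  # True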

Article 2.2 limits the application of the Act in the following circumstances:

“For AI systems classified as high-risk AI systems in accordance with Articles 6(1) and 6(2) related to products covered by Union harmonisation legislation listed in Annex II, section B, only Article 84 of this Regulation shall apply. Article 53 shall apply only insofar as the requirements for high-risk AI systems under this Regulation have been integrated under that Union harmonisation legislation.”

However, Article 2.3 states:

“This Regulation shall not apply to areas outside the scope of EU law and in any event shall not affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States to carry out the tasks in relation to those competences.”

“This Regulation shall not apply to AI systems if and insofar placed on the market, put into service, or used with or without modification of such systems exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.”

“This Regulation shall not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.”

Article 2.4 states:

“This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, under the condition that this third country or international organisations provide adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.”

Article 2.5 states:

“This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section 4 of Directive 2000/31/EC of the European Parliament and of the Council[*] [as to be replaced by the corresponding provisions of the Digital Services Act].”

[*] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’) (OJ L 178, 17.7.2000, p. 1)

Article 2.5a states:

“This Regulation shall not apply to AI systems and models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.”

However, Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation.

This Regulation shall not affect Regulations (EU) 2016/679 and (EU) 2018/1725 and Directives 2002/58/EC and (EU) 2016/680, without prejudice to arrangements provided for in Article 10(5) and Article 54 of this Regulation.

Article 2.5b states:

5b. This Regulation shall not apply to any research, testing and development activity regarding AI systems or models prior to being placed on the market or put into service; those activities shall be conducted respecting applicable Union law. The testing in real world conditions shall not be covered by this exemption.

This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.

Article 2.5c states:

5c. This Regulation shall not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.

Article 2.5e (5d is missing in the text) states:

5e. This Regulation shall not preclude Member States or the Union from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or to encourage or allow the application of collective agreements which are more favourable to workers.

Article 2.5g (5f is missing in the text) states:

5g. The obligations laid down in this Regulation shall not apply to AI systems released under free and open source licences unless they are placed on the market or put into service as high-risk AI systems or an AI system that falls under Title II and IV.

As one can observe, each of the above sub-clauses needs to be explored independently, which we shall do in subsequent articles.

If we look at how DGPSI tries to address similar concerns, we bank upon the legal provision in India under Section 11 of ITA 2000, which provides that the actions of an automated system are attributable to the “Controller” of the system.

Hence, under DGPSI, if we identify any automated system that may be classified as AI or Generative AI, we try to identify whether the Data Fiduciary is in control of the means of processing. If the Data Fiduciary does not know what the AI code is doing, the AI developer or deployer is considered a “Joint Data Fiduciary”, and the responsibilities for compliance with DPDPA 2023, along with the liabilities, lie with him; the contract should therefore provide an indemnity clause. Where the deployer is ready to disclose the code, or to give an assurance that the means of processing is exactly what the data fiduciary prescribes and is auditable by him, the deployer may be considered a “Data Processor”.
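As a summary of this role-assignment logic, here is a minimal sketch in Python. The flags (code_disclosed, assurance_given, auditable) are hypothetical simplifications of what is, in practice, a documented contractual and audit assessment under DGPSI.

    def classify_ai_vendor(code_disclosed: bool,
                           assurance_given: bool,
                           auditable: bool) -> str:
        """DGPSI-style role assignment for an AI developer/deployer,
        following the reasoning described above (simplified)."""
        # The Data Fiduciary controls the means of processing only when the
        # vendor discloses the code, or gives an auditable assurance that
        # processing follows exactly what the fiduciary prescribes.
        if (code_disclosed or assurance_given) and auditable:
            return "Data Processor"
        # Otherwise the vendor shares fiduciary responsibilities, and the
        # contract should carry an indemnity clause for the liabilities.
        return "Joint Data Fiduciary (indemnity clause recommended)"

    # Example: a vendor that neither discloses code nor offers an auditable assurance.
    print(classify_ai_vendor(code_disclosed=False,
                             assurance_given=False,
                             auditable=False))
    # -> Joint Data Fiduciary (indemnity clause recommended)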

DGPSI recommends end-to-end business compliance, and hence the issue of role definition for AI deployment, from the perspective of the user data fiduciary, should be settled at the time of approving a vendor or purchasing a software system.

Also, unlike the EU-AI Act, which applies to both personal and non-personal data, under DGPSI we look at systems which process personal data, and in the case of the developer or vendor we expect them to be able to certify that their AI is a “DPDPA Compliant AI”. The AI-DTS system is being developed to evaluate the compliance maturity of an AI product.

(P.S.: Readers would appreciate that these are pioneering concepts, under development and continual improvement. The professional community is welcome to join hands with Naavi in developing these concepts further.)

Naavi

