Intersection point of the EU AI Act and DGPSI: AI-DTS

(P.S: DGPSI = Digital Governance and Protection Standard of India)

From the compliance perspective, the EU AI Act mainly addresses the handling of AI in three specific contexts: first, the development of AI (manufacturer); second, the deployment of AI (provider or deployer); and third, the distribution of AI software (importer or distributor).

Article 2(1) states that the Regulation applies to

“providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, … importers and distributors of AI systems, product manufacturers … and … affected persons”.

The term “manufacturer” has been used in the EU AI Act because one of the legislative concerns is the use of AI within other products, such as AI in automobiles. Here AI may be used as part of the automated product system, mainly for enhancing the quality and security of the product, as distinct from the use of AI in privacy contexts where the emphasis is on “Profiling”, “Targeted Advertising”, “Behavioural Manipulation” etc.

In terms of compliance, we need to look at each of the three contexts differently. Of these, development and deployment are the key areas of compliance. The “Affected persons” are relevant from the perspective of identifying the “Harm” or “Risk” in deployment.

At the development stage, the AI developer/manufacturer needs to be transparent and ensure that the algorithm is free from bias. At the same time, the developer should ensure that the machine learning process uses data without infringing copyright.

When a physical product manufacturer, such as an automobile manufacturer, embeds an AI system, say for efficient braking based on visual reading of an obstruction, it may be using the AI as a “Component”. The responsibility for compliance as a developer should then rest primarily with the AI software manufacturer, though it gets transferred to the automobile manufacturer by virtue of the embedded product being marketed by them. In the IT scenario, such usage of embedded products is more accurately identified as that of “Joint Data Fiduciaries” or “Joint Data Controllers”. In the context of an automobile manufacturer, the role of the automobile manufacturer as a “Data Fiduciary” is not clearly recognized, but DGPSI recognizes this difference and looks at the component as a “Data Processor”, the responsibility for which lies with the component manufacturer unless it is consciously taken over by the auto manufacturer.

The developer needs to establish an appropriate process for “compliance during the development of the AI”, which includes a proper testing document that can be shared with the deployer as part of the conformity assessment report.

At the deployment stage, the control of the AI system has passed on to the “Deployer”, and hence the role of the developer in compliance during usage is reduced.

However, Article 61 of the EU AI Act prescribes a post-market monitoring system which is required to be set up by the “Providers” to ensure compliance with the EU AI Act. Here the EU AI Act appears to use the term “Provider” from the perspective of both the developer and the deployer.

DGPSI, however, wants to maintain the distinction between the “Developer” and the “Deployer” and build compliance separately. Under DGPSI, both the development monitoring process and the deployment monitoring process can be expressed in terms of a Data Trust Score or DTS, which is the way DGPSI expresses the maturity of compliance in general.

AI-DTS-Developer and AI-DTS-Deployer could be the two expressions used to denote this compliance.
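By way of illustration, the sketch below shows one way such twin scores could be computed, assuming a simple weighted-control model. The control names and weights are hypothetical placeholders, not part of any DGPSI specification.

```python
# Minimal sketch of an AI-DTS computation, assuming a weighted-control model.
# Control names and weights are hypothetical illustrations only.

DEVELOPER_CONTROLS = {
    "bias_testing_documented": 30,
    "training_data_copyright_cleared": 30,
    "algorithm_transparency": 20,
    "conformity_test_report_shared": 20,
}

DEPLOYER_CONTROLS = {
    "dpdpa_consent_management": 40,
    "harm_assessment_of_affected_persons": 30,
    "post_deployment_monitoring": 30,
}

def dts(controls: dict[str, int], scores: dict[str, float]) -> float:
    """Weighted average of per-control scores (each in 0.0-1.0), on a 0-100 scale."""
    total_weight = sum(controls.values())
    earned = sum(weight * scores.get(name, 0.0) for name, weight in controls.items())
    return round(100 * earned / total_weight, 1)

# Example: a deployer evidencing only consent management scores 40.0.
# dts(DEPLOYER_CONTROLS, {"dpdpa_consent_management": 1.0}) -> 40.0
```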

AI deployers are the “Data Fiduciaries” under DPDPA 2023, and the compliance concern is mainly about how personal data collected in India is processed by the AI system.

Article 61 of the EU AI Act lays down the requirement of “post-market monitoring” by providers of high-risk AI systems. Let us look at Article 61 as the basis for AI-DTS-Deployer.

Article 61.1 states that

“Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.”

Article 61.2 states

“The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.”

The narration under Article 61.2 indicates that the developer of the AI system has to obtain post-market feedback, which is a debatable prescription.

It appears that through this prescription, the EU AI Act is legitimizing the installation of a backdoor by the developer.

Under DGPSI we reject this suggestion and identify the responsibilities of the developer separately from those of the deployer. It is open to them to determine whether they will be “Joint Data Fiduciaries” sharing the compliance responsibilities, or whether the deployer takes over the responsibilities entirely.

This is a key point of difference between the compliance requirements of the EU AI Act and the approach of DGPSI as a compliance framework. It is open for ISO 42001 to adopt the EU AI Act as it is expected to do, but DGPSI will maintain the distinction, which we consider flexible enough to treat the “AI backdoor” as a legitimate prescription, but only with the “Consent of the Deployer”.

This requires a full-scale debate…

Naavi

P.S: This debate is intended to develop Privacy Jurisprudence. Experts are requested to treat this as a brainstorming debate and add their positive thoughts to guide the law makers in India in developing a better AI Act than the EU AI Act.


Conformity Assessment: Article 11 of the EU AI Act

Article 11 of the EU AI Act states that there shall be “Technical Documentation” of a high-risk AI system before that system is placed on the market or put into service, and that it shall be kept up to date.

This is a document that a “Deployer” should obtain from the developer or supplier of the AI algorithm as part of its compliance requirements.

Under DGPSI*, which considers the AI algorithm supplier a Joint Data Fiduciary of the deploying company, the deployer needs to obtain from the supplier a conformity statement as an undertaking within the contract, under which the supplier also assumes liability for any non-compliance with DPDPA 2023.

The EU AI Act prescribes the following format of documentation (Annex IV), which is also relevant for DGPSI compliance:

1. A general description of the AI system (including the purpose of usage)

2. A detailed description of the elements of the AI system and the process for its development (applicable to developers and including the documented test process)

3. Detailed information about the monitoring, functioning and control of the AI system

4. A description of the appropriateness of the performance metrics for the specific AI system

5. A description of the relevant changes made by the provider to the system through its lifecycle

6. A list of harmonized standards applied in full or in part. #

7. A copy of the EU Declaration of Conformity ##

8. A detailed description of the system in place to evaluate the AI system’s performance in the post-market monitoring plan. ###

# The list of Union harmonisation legislation as per Annex II includes GDPR and other industry regulations where AI may be used as part of the system. In the Indian context, this includes ITA 2000 and the AI Advisory.

## In the Indian context, this corresponds to a DPDPA Declaration of Compliance.

### In the EU AI Act, providers need to establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system (Ref: Article 61).

The US has designated this “Process Controller” as the Chief AI Officer, a role that is mandatory for federal agencies.

In the Indian context, this function is included in the AI policy managed by the DPO, with the “Process Controller” operating under the distributed responsibility policy.

Under DGPSI, the above points are key to compliance, with the modification that points 5, 6 and 7 should refer to DPDPA compliance and point 8 to the measures undertaken by the deploying Data Fiduciary. Points 2 and 3 are more relevant for compliance in the developer ecosystem. A minimal sketch of how this checklist could be tracked is given below.
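For deployers who want to track these Annex IV items, here is a minimal sketch of a documentation record reflecting the DGPSI substitutions noted above; the field names are hypothetical illustrations.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Illustrative record of the Annex IV items, with the DGPSI
    substitutions noted above (items 5-7: DPDPA compliance;
    item 8: the deploying Data Fiduciary's own measures)."""
    general_description: str        # 1. system and purpose of usage
    development_process: str        # 2. elements and documented test process (developer)
    monitoring_and_control: str     # 3. monitoring, functioning and control (developer)
    performance_metrics: str        # 4. appropriateness of metrics
    lifecycle_changes: str          # 5. DGPSI: DPDPA-relevant changes over the lifecycle
    harmonized_standards: list[str] = field(default_factory=list)  # 6. DGPSI: ITA 2000 / AI Advisory
    declaration_of_conformity: str = ""    # 7. DGPSI: DPDPA declaration of compliance
    post_market_monitoring_plan: str = ""  # 8. DGPSI: Data Fiduciary's measures

    def missing_items(self) -> list[str]:
        """Names of the items that are still empty and need evidence."""
        return [name for name, value in vars(self).items() if not value]
```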

*PS: DGPSI, or Digital Governance and Protection Standard of India, is the indigenous framework developed by FDPPI/Naavi for compliance with DPDPA along with ITA 2000 and the BIS draft standard on Data Protection.


(…to be continued)

Naavi


“Conformity Assessment” under the EU AI Act

The EU AI Act introduces a new term, “Conformity Assessment”, to mean a “Compliance Assessment”. In GDPR, the corresponding term was “Privacy by Default”; DGPSI uses the term “Compliance by Default”. “Conformity Assessment” stands for an assurance certification of an AI system indicating whether the requirements of Title III, Chapter 2 relating to high-risk AI systems have been fulfilled.

Article 3(20) defines ‘conformity assessment’ as “the process of demonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to a high-risk AI system have been fulfilled”

This covers the following articles; a simple checklist sketch follows the list:

Article 8: Compliance with the requirements

Article 9: Risk management system

Article 10: Data and data governance

Article 11: Technical documentation

Article 12: Record-keeping

Article 13: Transparency and provision of information to deployers

Article 14: Human oversight

Article 15: Accuracy, robustness and cybersecurity
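As a convenience for compliance trackers, the same list can be held as a simple mapping; this is an illustrative sketch, not an official checklist format.

```python
# Illustrative mapping of Title III, Chapter 2 articles to checklist labels.
TITLE_III_CH2_CHECKS = {
    8:  "Compliance with the requirements",
    9:  "Risk management system",
    10: "Data and data governance",
    11: "Technical documentation",
    12: "Record-keeping",
    13: "Transparency and provision of information to deployers",
    14: "Human oversight",
    15: "Accuracy, robustness and cybersecurity",
}

def open_items(evidenced: set[int]) -> list[str]:
    """Labels of the articles for which evidence has not yet been recorded."""
    return [label for article, label in TITLE_III_CH2_CHECKS.items()
            if article not in evidenced]
```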

Let us explore this further in the next article.

Naavi


Rameshwaram Cafe Blast: Responsibility of the Telecom Company

It has been reported that in the Rameshwaram Cafe blast, one person who had bought a SIM card and a second-hand mobile from a shop was questioned, since his number was involved in the communication related to the blast.

The seller of the mobile has since been cleared, and it has been identified that the SIM card buyer had misused the credentials of some other person to create a fake ID and used it on the second-hand mobile. A similar incident had occurred a few years back, when a property owner in Bangalore was falsely accused in a terror case because a fake Aadhaar card had been issued in his name and used by the terrorist.

When such frauds occur, the dealer who created the fake ID becomes an accomplice and needs to be punished. At the same time, the telecom company which appointed the dealer is also liable for the same offence.

The offence comes under ITA 2000 under different sections such as Sections 66, 66C, 66D and 66F, and Section 43, etc. The same offence gets recognized under DPDPA 2023 as a failure of compliance for which penalties may be imposed (when that Act becomes fully operative).

In some of these cases, the telecom operator, be it Vodafone, Airtel or Jio, offers two kinds of defences: that it had followed “Reasonable Security Practices” under Section 43A, and that it could be considered an “Intermediary” protected from liability under Section 79 of ITA 2000.

In some cases, the companies indicate that they have obtained an ISO 27001 certificate, which they argue should be treated as “deemed compliance” with Section 43A.

In this context, I would like to state my views on why telecom companies should not be complacent that an ISO 27001 certification can protect them from being held liable under Sections 43A, 43 or 85 and other sections of ITA 2000, both for civil penalties and for criminal punishment of their executives.

In the ICICI Bank vs S Umashankar case (paras 8-15, pages 9-16), the TDSAT held on appeal that if security practices are not followed, Section 43(g) may be applied against the company for “facilitating the contravention through negligence” (in that case it was the Bank, but the principle is applicable to a telecom company for negligence in SIM card issuance).

Whether holding an ISO 27001 certificate amounts to an adequate security practice did not come up for discussion in the Umashankar case. However, in another recent case before the TDSAT, this question has come to the fore in the context of SIM card issuance at a retail store/agent.

Since the ISO certificate covered a different system and bore a different date, it had no relation to the SIM card issuance process. At the same time, since SIM card “activation” is done only by an authorized official, the retail store agent is merely a contractor who verifies the KYC documents and recommends activation. Hence the telecom company cannot claim “Intermediary” status. Moreover, the KYC information is not an “intermediary’s data” but the “data of the telecom operator for its own consumption”, and hence cannot confer intermediary status on the telecom company under Section 2(1)(w) of ITA 2000.

Further, with effect from 1st December 2023, the Department of Telecommunications introduced new rules requiring all customers applying for a new or replacement SIM to go through the KYC process.

“The guidelines also state that all telecom operators are now required to register their franchises, PoS agents, and distributors. Further, they will have to undergo verification. Failure to comply will result in a fine of Rs 10 lakhs. Point-of-Sale (PoS) agents must register themselves through a written agreement with licensees. Existing PoS agents have a 12-month window to align with the new registration process specified by licensees.

This measure aims to eliminate rogue PoS agents who engage in fraudulent practices, such as issuing SIM cards to antisocial or anti-national elements. The government has instructed that any existing PoS agents engaging in illegal activities will face termination and a three-year blacklist.”

It should therefore be one of the compliance requirements of every telecom operator to ensure that the PoS agent displays the registration document indicating that he is an authorized agent for issuing SIM cards.

Further, mobile customers can check from time to time the number of SIM cards linked to them by verifying their number at https://tafcop.sancharsaathi.gov.in/telecomUser/

Currently, up to 9 SIM cards can be issued to a single person, and bulk SIM cards for companies are issued through an authorized signatory registered by the organization with the DoT.

PS: It is possible that most telecom companies have not yet introduced the security measures envisaged in the December 1, 2023 guideline, and compliance auditors need to specially check a sample of retail stores to ensure that proper systems are in place at SIM card issuing outlets.

Naavi


Classification of AI under the EU AI Act

(Continuation of Previous Article)

Having discussed the definition of AI and the applicability of the EU AI Act in broad terms in the two previous articles, let us continue our discussion on the “Classification of AI” under the EU AI Act, which is important from the point of view of “Risk Assessment”.

For the purpose of compliance, the EU AI Act classifies AI systems as follows:

1. Prohibited systems (Title II, Article 5)

2. High-risk systems (Title III, Articles 6 and 7)

3. Limited-risk systems

4. Minimal-risk systems

5. General-purpose AI models (Title VIIIA, Article 52a)

This classification is based on “Risk Assessment”. Prohibited systems are those AI systems which present an “Unacceptable Risk”. In assessing “Risk”, one needs to look at the “Harm” caused to the end users of the AI systems, namely the people. The sketch below illustrates how the tiers can be mapped to obligations.
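Here is a minimal sketch of the tiering as a data structure; the obligation lists are rough illustrations, and the authoritative requirements must be read from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Title II, Article 5)"
    HIGH = "high risk (Title III, Articles 6-7)"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"
    GPAI = "general-purpose AI model (Article 52a)"

# Rough, illustrative obligation map; not a substitute for reading the Act.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not place on the market or put into service"],
    RiskTier.HIGH: ["conformity assessment (Title III, Chapter 2)",
                    "technical documentation (Annex IV)",
                    "post-market monitoring (Article 61)"],
    RiskTier.LIMITED: ["transparency obligations"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
    RiskTier.GPAI: ["model documentation and evaluation duties"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given risk tier."""
    return OBLIGATIONS[tier]
```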

As per Article 1.1,

the purpose of this Regulation is to improve the functioning of the internal market and promoting the uptake of human centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, rule of law and environmental protection against harmful effects of artificial intelligence systems in the Union and supporting innovation.

The term “Harm” under the Act therefore includes any adverse effect on the “functioning of the internal market”, besides anything that hinders the “uptake of human-centric and trustworthy artificial intelligence”. Anything that affects the protection of health, safety, fundamental rights, rule of law, environmental protection, etc. will be considered “harm” under the Act.

When we apply AI law to “Personal Data Protection”, we look only at the harm caused to individuals. But the EU Act appears to expand its scope to the economic environment, and more particularly to the EU geographic space.

This also means that the “extra-territorial” application of the penalty clauses is limited to the adverse impact that may be caused within the Union. Hence, if an AI system in India does not impact the EU, compliance with the EU AI Act is redundant. Likewise, if organizations consider ISO 42001 certification for AI systems that have a footprint only in India, it may be considered redundant. What is more relevant is compliance with ITA 2000/DPDPA, which is addressed by a DGPSI audit and not by an ISO 42001 audit.

Now we shall explore Article 5 which defines the “Unacceptable Risks” or “Prohibited AI practices”.

According to Article 5.1, the unacceptable risks include

(1) AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm.

(2) AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective to or the effect of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

(3) use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. (This prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorising of biometric data in the area of law enforcement;)

(4) AI systems for the evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts that are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

(5) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement unless and in as far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of human beings as well as search for missing persons;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences, referred to in Annex IIa and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. This paragraph is without prejudice to the provisions in Article 9 of the GDPR for the processing of biometric data for purposes other than law enforcement.

(6) use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person to commit a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. (This prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;)

(7) use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;

(8) use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except in cases where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.

It may be observed that the Act provides several exemptions for the use of the prohibited systems by law enforcement authorities. Such use may, however, be subject to certain conditions and is required to be reported to the Commission in annual reports.

Let us continue our discussion on other classifications of Risk based AI systems in the next article.

Naavi



Applicability and Non-Applicability of EU-AI Act

(continuation from the previous Article)

In order to look at the compliance requirements under the EU AI Act, we need to first understand what AI is, what the role of our organization is under the EU AI Act, and how the AI system is classified.

We have discussed the definition of AI in our previous article.

In this article, which may need some extensions, we shall explore the roles to which the EU AI Act is applicable, as set out in Article 2.

Under Article 2.1 the Act is applicable to 

(a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or who are located within the Union or in a third country;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the Union;

(ca) importers and distributors of AI systems;

(cb) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

(cc) authorised representatives of providers, which are not established in the Union;

(cc) affected persons that are located in the Union.

It is observed from the above scope that organizations developing AI in India whose products are accessible in the EU, whether because the developer exports the products to importers in the EU, runs a website accessible from the EU, or directly operates in the EU and offers the service, may come under the Act.

Article 2.2 qualifies this applicability as follows:

For AI systems classified as high-risk AI systems in accordance with Articles 6(1) and 6(2) related to products covered by Union harmonisation legislation listed in Annex II, section B only Article 84 of this Regulation shall apply.

 Article 53 shall apply only insofar as the requirements for high-risk AI systems under this Regulation have been integrated under that Union harmonisation legislation.

However, Article 2.3 states:

“This Regulation shall not apply to areas outside the scope of EU law and in any event shall not affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States to carry out the tasks in relation to those competences.”

“This Regulation shall not apply to AI systems if and insofar placed on the market, put into service, or used with or without modification of such systems exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.”

“This Regulation shall not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.”

Article 2.4 states:

“This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, under the condition that this third country or international organisations provide adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.”

Article 2.5 states:

“This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section 4 of Directive 2000/31/EC of the European Parliament and of the Council[*] [as to be replaced by the corresponding provisions of the Digital Services Act].”

[*] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’) (OJ L 178, 17.7.2000, p. 1)

Article 2.5a states:

This Regulation shall not apply to

AI systems and models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.

(However) Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation.

This Regulation shall not affect Regulations (EU) 2016/679 and (EU) 2018/1725 and Directives 2002/58/EC and (EU) 2016/680, without prejudice to arrangements provided for in Article 10(5) and Article 54 of this Regulation.

Article 2.5b states:

5b. This Regulation shall not apply to any research, testing and development activity regarding AI systems or models prior to being placed on the market or put into service; those activities shall be conducted respecting applicable Union law. The testing in real world conditions shall not be covered by this exemption.

This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.

Article 2.5c states:

5c. This Regulation shall not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.

Article 2.5e (5d is missing in the draft numbering) states:

5e. This Regulation shall not preclude Member States or the Union from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or to encourage or allow the application of collective agreements which are more favourable to workers.

Article 2.5g (5f is missing in the draft numbering) states:

5g. The obligations laid down in this Regulation shall not apply to AI systems released under free and open source licences unless they are placed on the market or put into service as high-risk AI systems or an AI system that falls under Title II and IV.

As one can observe, each of the above sub-clauses needs to be explored independently, which we shall do in subsequent articles.

If we look at how DGPSI tries to address similar concerns, we bank upon the legal provision in India under ITA 2000 (Section 11), which provides that the actions of an automated system are attributable to the “Controller” of the system.

Hence, under DGPSI, if we identify any automated system which may be classified as AI, Generative AI, etc., we try to identify whether the Data Fiduciary is in control of the means of processing. If the Data Fiduciary does not know what the AI code is doing, the AI developer or deployer is considered a “Joint Data Fiduciary”, and all responsibilities for compliance with DPDPA 2023 lie with him, along with the liabilities, for which the contract should provide an indemnity clause. Where the deployer is ready to disclose the code or give an assurance that the means of processing is exactly what the data fiduciary prescribes and is auditable by him, the deployer may be considered a “Data Processor”. A minimal sketch of this role test is given below.
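The following is a minimal sketch of that role test under the stated DGPSI approach; the field names are hypothetical, and the real determination would of course rest on contracts and audits.

```python
from dataclasses import dataclass

@dataclass
class AIVendorAssessment:
    """Facts gathered while evaluating an AI vendor (illustrative fields)."""
    code_disclosed: bool            # vendor discloses the code / its logic
    processing_as_prescribed: bool  # means of processing follow the fiduciary's prescription
    auditable_by_fiduciary: bool    # the Data Fiduciary can audit the system

def classify_vendor_role(a: AIVendorAssessment) -> str:
    """If the Data Fiduciary controls (and can audit) the means of processing,
    the vendor is treated as a Data Processor; otherwise as a Joint Data
    Fiduciary, carrying compliance responsibility and an indemnity clause."""
    if a.code_disclosed or (a.processing_as_prescribed and a.auditable_by_fiduciary):
        return "Data Processor"
    return "Joint Data Fiduciary"
```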

DGPSI recommends end-to-end business compliance, and hence the issue of role definition for AI deployment, from the perspective of the user data fiduciary, should be settled at the time of approving a vendor or buying a software system.

Also, unlike the EU AI Act, which applies to both personal and non-personal data, under DGPSI we look at systems which process personal data, and in the case of the developer or vendor, we expect them to be able to certify that their AI is a “DPDPA-Compliant AI”. The AI-DTS system is being developed to evaluate the compliance maturity of the AI product.

(P.S: Readers would appreciate that these are pioneering concepts under development and continual improvement. The professional community is welcome to join hands with Naavi in developing these concepts further.)

Naavi
