Should we start DPDPA Compliance today?

The five most frequent queries we receive from companies in the market today are:

1. Is DPDPA 2023 effective today, or should we wait for the notification?

2. Should I start my compliance program today or wait till the rules are notified?

3. How long will the implementation typically take?

4. If we want to start a DPDPA compliance program, what is the right framework to adopt?

5. Who has to lead the implementation in a company?

Let me try to add my views on each of the above queries.

1. Is DPDPA 2023 effective today, or should we wait for the notification?

DPDPA (the Digital Personal Data Protection Act, 2023) was passed by Parliament, and the relevant gazette notification was issued on August 11, 2023, after the President signed the Bill into an Act.

However, Section 1(2) of DPDPA 2023 states that the Act "shall come into force on such date as the Central Government may, by notification in the Official Gazette, appoint and different dates may be appointed for different provisions of this Act and any reference in any such provision to the commencement of this Act shall be construed as a reference to the coming into force of that provision."

One school of thought is that since the notification has not yet come, the Act is not yet in force.

Strictly from the legal perspective, this view cannot be brushed aside.

However, a prudent corporate entity does not wait for a penalty notice to be delivered or an arrest warrant to be issued before taking steps to be compliant.

Compliance with DPDPA 2023 is a "Risk Management Measure" for all companies, and it is for the Board, the Independent Directors, the CEO, the CRO and the CFO in particular to recognize the possible impact of non-compliance.

We can procrastinate and say "Let the rules be notified", "Let the DPB (Data Protection Board) be appointed", "Let a breach happen", "Let me receive a notice", etc., and then claim we can challenge the notice in a Court and escape.

But is this a wise strategy for a corporate entity? One needs to ponder.

It must be remembered that DPDPA 2023 is not a completely new legislation, as many may think. It is a continuation of ITA 2000/8, since one of its sections, Section 43A, is being replaced by the new Act. Section 43A expects companies handling sensitive personal data to follow reasonable security practices, and "reasonable" includes "due diligence": acting in a manner considered best industry practice from the perspective of both the law that is effective today and the law that is pending notification of a date from which penalties may become effective.

Even otherwise, ITA 2000 has Section 43 along with Sections 66, 72A, 66C and 66D, read with Section 85, all of which may impose both civil and criminal penalties on the persons in charge of business in a company, the Directors, the Company Secretary, etc.

ITA 2000 already has a regulatory mechanism which includes the Adjudicating Officer under Section 46, CERT-In under Section 70B, and the Police. The Adjudicating Officer can impose penalties, CERT-In can impose penalties and also recommend prosecution, and the Police can start prosecution in case there is a breach of data.

DPDPA can be considered different only in that liabilities under ITA 2000 may fructify after a breach has taken place, while penalties under DPDPA can be imposed in many cases even if there is no breach. ITA 2000 is, however, riskier from another angle, since action under ITA 2000 could lead to imprisonment of corporate executives, which DPDPA 2023 does not contemplate.

Even after DPDPA 2023 comes into force, ITA 2000 will not vanish, and hence some of the liabilities under ITA 2000 may still be relevant for companies.

Those companies which do not flag these risks today are probably those which will face the wrath of the law on a later day.

I therefore consider that wise managements need to treat DPDPA 2023 as, in principle, effective as of today.

2. Should I start my compliance program today or wait till the rules are notified?

Compliance is a journey, and the earlier one starts, the better it is. Even before the first controls are in place, an organization needs to "discover" the covered data and put the necessary classification in place.
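
As an illustration of that "discovery" step, here is a minimal sketch of a regex-based scan for common personal-data patterns in free text. The patterns and category names are illustrative assumptions for the example, not a DGPSI or DPDPA prescription; real discovery tooling would cover many more categories and data stores.

```python
import re

# Illustrative patterns only; real discovery would cover many more categories.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "indian_mobile": re.compile(r"(?:\+91[-\s]?)?[6-9]\d{9}\b"),
    "aadhaar_like": re.compile(r"\b\d{4}\s\d{4}\s\d{4}\b"),  # 12-digit pattern only
}

def classify(record: str) -> set:
    """Return the personal-data categories detected in a free-text record."""
    return {name for name, rx in PATTERNS.items() if rx.search(record)}

sample = "Contact Ravi at ravi@example.com or +91-9876543210 for the invoice."
print(classify(sample))  # detects 'email' and 'indian_mobile'
```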

Consent needs to be obtained from legacy data principals, and any delay will only add to the legacy personal data that is not in conformity with DPDPA 2023. Previous consents obtained on the basis of our understanding of GDPR, or under the guidance of earlier privacy consultants, may not suffice for compliance with the new law.

A wise corporate executive will therefore start the compliance program today and make the necessary updates when the rules are notified. Such updates are a routine requirement and will continue.

3. How long will the implementation typically take?

It is difficult to say how much time it takes to achieve compliance. Normally it takes not less than three months for a medium-sized company to take care of the basic requirements, and satisfactory implementation may take a further six months. The actual time depends on the size and operations of the organization.

4. If we want to start a DPDPA compliance program, what is the right framework to adopt?

At present, the only framework designed to meet DPDPA compliance is DGPSI (Digital Governance and Protection Standard of India), developed by the professionals of FDPPI.

ISO 27701, which was developed with GDPR in mind, is not suitable for DPDPA compliance, and no other framework is available.

DGPSI combines compliance with DPDPA 2023, ITA 2000/8 and the draft BIS standard for Data Protection (released on August 10, 2023). The book "Guardians of Privacy: Comprehensive Handbook on DPDPA and DGPSI" would be the starting point for the journey to understand DGPSI. Getting certified with FDPPI as a Certified DPO and Data Auditor (C.DPO.DA) is the next step.

5. Who has to lead the implementation in a company?

Most Indian companies do not have a DPO at present, and some of them have designated their CISO as the DPO. The DPO is the designated person in a company who needs to assume leadership for DPDPA compliance. Small companies which are not "Significant Data Fiduciaries" need not have a designated DPO but may designate one suitable person as a "DPDPA Compliance Officer".

However, DGPSI recognizes that compliance with DPDPA is an enterprise-level responsibility, and hence the implementation responsibility has to be shared. The Apex Governance Committee consisting of different stakeholders, together with the policy of "Distributed Responsibility" suggested in DGPSI, makes implementation a joint responsibility of the governance team, though the DPO remains the leader.

The starting point for organizations today may actually be with the CFO and the CRO, who have to flag the risk of penalty and start working on cyber insurance and the appointment of a DPO.

The lead therefore rests with the Board of the company, which should do a quick business impact analysis and decide how to move ahead with compliance.

I welcome any queries on the above and am happy to debate any disagreements.

Naavi


Generative AI and EU AI Act

One of the major concerns of society regarding AI is the "Dis-intermediation of human beings from the decision process". There is also the risk of AI systems becoming sentient at some point in the future, which will remain the long-term risk.

In the mid term and the short term, AI is already posing the risks of "Biased outputs due to faulty machine training", "Automated Decision Making", "Poisoning of Training Models", "Behavioural Manipulations", "Neuro-Rights Violations", "Destruction of the Right of Choice of a human", etc.

One of the specific areas of concern is the development of large language models with the functionality of predicting the action of a user, exercised with a creative license. The creative license leads to "Hallucination" and "Rogue behaviour" in an LLM like ChatGPT or Bard, and could create more problems when such software is embedded into humanoid robots.

Industrial robots, on the other hand, are less prone to such rogue behaviour on their own (except when they are hacked), since the creative license given to an industrial robot is limited and they operate in the ANI (Artificial Narrow Intelligence) area.

In India, the use of AI to generate "Deep Fakes" and "Fake News" is already in vogue. There is large-scale feeding of false data into the internet in the hope that it will poison the learning systems which parse information from internet resources such as websites, blogs, Instagram, X, etc. There are many declared and undeclared "parody" accounts which boldly state falsehoods that a casual observer may take as true. The sole purpose of such accounts and content is to poison the learning systems that extract public data and incorporate it into news delivery systems. Many AI systems operate to generate content for such fake X accounts, so that AI-generated false information feeds back into the training data and generates further fake news.

Unfortunately, the Indian Supreme Court, dancing to the tune of the anti-national lobby, frustrated the efforts of the Government to call out fake narratives even when such narratives are put out in the name of Ministries and Government departments.

The EU AI Act recognizes the risk of generative AI and identifies it as "High Risk" AI by underscoring "high impact capabilities" and "systemic risk at Union level".

Even among prohibited AI practices, the EU AI Act includes AI systems that deploy

“subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;”

Many of the LLMs could be posing such risks through their predictive generation of content, either as text or as speech. A "deepfake" per se (made for fun) may not be classified as "High Risk" under the EU AI Act, but tagged with its usage, a deepfake can be considered "High Risk" or even "Prohibited Risk".

Title VIIIA specifically addresses general purpose AI models. The compliance measures related to these impact the developers more than the deployers, since deployers would be relying upon the conformity assessment assurances given by the developers.

In respect of AI systems which are already in the market and have not been classified as high-risk AI systems, but are modified by an intermediary so as to be considered high-risk AI systems, the intermediary will himself be considered the "provider" (developer).

1. A general purpose AI model shall be classified as general-purpose AI model with systemic risk if it meets any of the following criteria:
(a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks;
(b) based on a decision of the Commission, ex officio or following a qualified alert by the scientific panel that a general purpose AI model has capabilities or impact equivalent to those of point (a).

2. A general purpose AI model shall be presumed to have high impact capabilities pursuant to point (a) of paragraph 1 when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25.*
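
To see how the 10^25 FLOPs presumption plays out in practice, here is a back-of-the-envelope sketch using the widely cited 6 × parameters × tokens approximation for dense-transformer training compute. The approximation and the example model size are assumptions for illustration; the Act itself only states the threshold.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the presumption quoted above

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate dense-transformer training compute via the 6*N*D rule of thumb."""
    return 6 * n_params * n_tokens

# A hypothetical 100-billion-parameter model trained on 20 trillion tokens:
flops = training_flops(100e9, 20e12)
print(f"{flops:.2e} FLOPs; systemic-risk presumption triggered: "
      f"{flops > SYSTEMIC_RISK_THRESHOLD}")
# -> 1.20e+25 FLOPs; systemic-risk presumption triggered: True
```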

According to Article 52b:

Where a general purpose AI model meets the requirements referred to in points (a) of Article 52a(1), the relevant provider shall notify the Commission without delay and in any event within 2 weeks after those requirements are met or it becomes known that these requirements will be met. That notification shall include the information necessary to demonstrate that the relevant requirements have been met. If the Commission becomes aware of a general purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk.

The Commission shall ensure that a list of general purpose AI models with systemic risk is published and shall keep that list up to date, without prejudice to the need to respect and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law.

Article 52c provides the obligations of providers of general purpose AI models, which may be relevant to such providers and to persons who build their products on top of such models and market them under their own brand name. (We are skipping further discussion on this since we are focussing on the user's compliance requirements for the time being.)

It may however be noted that the MeitY advisory of March 4 (reproduced below) also requires notification to MeitY and registration of the person accountable for the generative AI software.

This advisory has been issued under ITA 2000 as an intermediary guideline, treating the deployer of the AI as an intermediary.

Naavi

(More to follow…)

Reference: *

A floating-point operation is any mathematical operation (such as +, -, *, /) or assignment that involves floating-point numbers (as opposed to binary integer operations).

Floating-point numbers have decimal points in them. The number 2.0 is a floating-point number because it has a decimal in it. The number 2 (without a decimal point) is a binary integer.

Floating-point operations involve floating-point numbers and typically take longer to execute than simple binary integer operations. For this reason, most embedded applications avoid widespread usage of floating-point math in favor of faster, smaller integer operations.
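
In code, the distinction the reference draws looks like this (a trivial Python illustration):

```python
a, b = 2.0, 3.5   # floating-point numbers (they carry decimal points)
i, j = 2, 3       # binary integers

print(type(a + b))  # <class 'float'> -> this addition is a floating-point operation
print(type(i + j))  # <class 'int'>   -> this addition is an integer operation
```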


Intersection point for EU AI Act and DGPSI: AI-DTS

(P.S: DGPSI=Digital Governance and Protection Standard of India)

From the compliance perspective, the EU AI Act mainly addresses the handling of AI in three specific contexts. The first is the development of AI (manufacturer), the second is the deployment of AI (provider or deployer), and the third is the distribution of AI software (importer or distributor).

Article 1(a) states that the regulation applies to

"providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, …, importers and distributors of AI systems, product manufacturers … and … affected persons".

The term "manufacturer" has been used in the EU AI Act because one of the legislative concerns is the use of AI within other products, such as AI in automobiles. Here AI may be used as part of the automated product system, mainly for enhancing the quality and security of the product, as distinct from the use of AI in a privacy context, where the emphasis is on "Profiling", "Targeted Advertising", "Behavioural Manipulation", etc.

In terms of compliance, we need to look at each of the three contexts differently. Of these, development and deployment are the key areas of compliance. The "affected persons" are relevant from the perspective of identifying the "harm" or "risk" in deployment.

At the development stage, AI developer/manufacturer needs to be transparent and ensure that the algorithm is free from bias. At the same time the developer should ensure that the machine learning process uses data without infringing copyright.

When a physical product manufacturer, such as an automobile manufacturer, embeds an AI system (say, for efficient braking based on visual detection of an obstruction), it may be using the AI as a "component", and the responsibility for compliance as a developer should rest primarily with the AI software manufacturer, though it gets transferred to the automobile manufacturer by virtue of the embedded product being marketed by them. In the IT scenario, such users of embedded products are more accurately identified as "Joint Data Fiduciaries" or "Joint Data Controllers". The role of the automobile manufacturer as a "Data Fiduciary" is not clearly recognized, but DGPSI recognizes this difference and looks at the component as a "Data Processor", the responsibility for which lies with the component manufacturer unless it is consciously taken over by the auto manufacturer.

The developer needs to establish an appropriate process for compliance during the development of an AI system, which includes a proper testing document that can be shared with the deployer as part of the conformity assessment report.

At the deployment stage, control of the AI system has passed on to the "deployer", and hence the role of the developer in compliance during usage is smaller.

But Article 61 of the EU AI Act prescribes a post-market monitoring system, which is required to be set up by the "providers" to ensure compliance with the EU AI Act. Here the Act appears to use the term "provider" from the perspective of both the developer and the deployer.

DGPSI, however, wants to maintain the distinction between the "developer" and the "deployer" and build compliance separately. Under DGPSI, both the development monitoring process and the deployment monitoring process can be expressed in terms of a Data Trust Score (DTS), which is how DGPSI expresses the maturity of compliance in general.

AI-DTS-Developer and AI-DTS-Deployer could be the two expressions used to express this compliance.
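
Purely as an illustration of how such an expression might work, the sketch below aggregates per-requirement maturity ratings into a single score. The requirement names, weights and 0-100 scale are invented for the example; the actual DTS computation is defined by DGPSI/FDPPI and is not reproduced here.

```python
# Invented requirement names, ratings and equal weights for illustration only.
def data_trust_score(ratings: dict, weights: dict) -> float:
    """Weighted average of per-requirement maturity ratings on a 0-100 scale."""
    total = sum(weights.values())
    return sum(ratings[k] * weights[k] for k in ratings) / total

developer = {"bias_testing": 80, "training_data_rights": 70, "test_documentation": 90}
deployer = {"purpose_limitation": 85, "consent_handling": 75, "post_market_monitoring": 60}

print("AI-DTS-Developer:", round(data_trust_score(developer, {k: 1.0 for k in developer}), 1))  # 80.0
print("AI-DTS-Deployer :", round(data_trust_score(deployer, {k: 1.0 for k in deployer}), 1))    # 73.3
```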

AI deployers are "Data Fiduciaries" under DPDPA 2023, and the compliance concern is mainly about how the personal data collected in India is processed by the AI system.

Article 61 of the EU AI Act provides the requirement of "post-market monitoring" by providers for high-risk AI systems. Let us look at Article 61 as the basis for AI-DTS-Deployer.

Article 61.1 states that

“Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.”

Article 61.2 states

The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.

The narration under 61.2 indicates that the developer of the AI system has to get post-market feedback, which is a debatable prescription.

It appears that through this prescription, the EU AI Act is legitimizing the installation of a backdoor by the developer.

Under DGPSI we reject this suggestion and identify the responsibilities of the developer separately from those of the deployer. It is open to them to determine whether they will be "Joint Data Fiduciaries" sharing the compliance responsibilities, or whether the deployer takes over the responsibilities all by himself.

This is a key point of difference between the compliance requirements of the EU AI Act and the approach of DGPSI as a compliance framework. It is open for ISO 42001 to adopt the EU AI Act as it is expected to do, but DGPSI will keep up the distinction, which we consider flexible enough to treat the "AI backdoor" as a legitimate prescription, but only with the "consent of the deployer".
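
The consent-gated alternative DGPSI favours could look something like the following sketch, where post-market feedback flows to the developer only when the deployer has explicitly consented. The field and function names are assumptions for illustration, not a prescribed mechanism.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoringEvent:
    system_id: str
    metric: str                 # e.g. an accuracy or drift measurement
    value: float
    deployer_consented: bool    # explicit deployer consent, per the DGPSI view
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def forward_to_developer(event: MonitoringEvent) -> bool:
    """Send post-market feedback upstream only with the deployer's consent."""
    if not event.deployer_consented:
        return False  # retained locally: no silent 'backdoor' to the developer
    # ... transmit to the developer's post-market monitoring endpoint ...
    return True
```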

This requires a full scale debate…

Naavi

P.S.: This debate is intended to develop Privacy Jurisprudence. I request experts to consider this a brainstorming debate and add their positive thoughts, to guide the law makers in India to develop a better AI act than the EU AI Act.


Conformity Assessment: Article 11 of the EU AI Act

Article 11 of the EU AI Act states that there shall be "technical documentation" of a high-risk AI system before that system is placed on the market or put into service, and that it shall be kept up to date.

This is a document that a “Deployer” should obtain from the developer or supplier of the AI algorithm as part of the compliance requirements.

Under DGPSI*, which considers the AI algorithm supplier a Joint Data Fiduciary of the deploying company, the deployer needs to obtain from the supplier an undertaking, in the form of a conformity statement, as part of the contract, under which the supplier also assumes liability for any non-compliance with DPDPA 2023.

The EU AI Act prescribes the following format of documentation (Annex IV), which is also relevant for DGPSI compliance.

1. A general description of the AI system (including the purpose of usage)

2. A detailed description of the elements of the AI system and the process for its development (applicable to developers, and including the documented test process)

3. Detailed information about the monitoring, functioning and control of the AI system

4. A description of the appropriateness of the performance metrics for the specific AI system

5. A description of the relevant changes made by the provider to the system through its lifecycle

6. A list of harmonised standards applied in full or in part#

7. A copy of the EU declaration of conformity##

8. A detailed description of the system in place to evaluate the AI system's performance in the post-market monitoring plan###

#The list of Union harmonisation legislation as per Annex II includes GDPR and other industry regulations where AI may be used as part of the system. In the Indian context, this includes ITA 2000 and the AI advisory.

##A DPDPA Declaration of Compliance

###In the EU AI Act, providers need to establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system (ref. Article 61).

The US has termed this "Process Controller" the Chief AI Officer, a position which is mandatory for federal agencies.

In the Indian context, this is included in the AI policy managed by the DPO, with the "Process Controller" operating under the distributed responsibility policy.

Under DGPSI, the points highlighted above are key to compliance, with the modification that points 5, 6 and 7 should refer to DPDPA compliance, and point 8 to the measures undertaken by the deploying Data Fiduciary. Points 2 and 3 are more relevant for compliance in the developer ecosystem.
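
As an illustration, a deploying Data Fiduciary might track the Annex IV items (with the DGPSI substitutions noted above) as a simple checklist. The item wording is paraphrased and the statuses are placeholders, not prescribed values.

```python
# Item wording paraphrased from Annex IV; statuses are illustrative placeholders.
annex_iv = {
    1: ("General description of the system and its purpose", "done"),
    2: ("Elements and development process, incl. test records", "from developer"),
    3: ("Monitoring, functioning and control of the system", "from developer"),
    4: ("Appropriateness of the performance metrics", "done"),
    5: ("Lifecycle changes (read: DPDPA compliance)", "open"),
    6: ("Harmonised standards applied (read: DPDPA compliance)", "open"),
    7: ("Declaration of conformity (read: DPDPA declaration)", "open"),
    8: ("Post-market monitoring plan (deployer's own measures)", "open"),
}
for no, (item, status) in annex_iv.items():
    print(f"{no}. [{status}] {item}")
```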

*P.S.: DGPSI, or Digital Governance and Protection Standard of India, is the indigenous framework developed by FDPPI/Naavi for compliance with DPDPA along with ITA 2000 and the BIS draft standard on Data Protection.

 

(…to be continued)

Naavi


“Conformity Assessment” under EU-AI act

The EU AI Act introduces a new term, "Conformity Assessment", to mean a compliance assessment. In GDPR the term used was "Privacy by Default"; DGPSI uses the term "Compliance by Default". "Conformity Assessment" stands for an assurance certification of an AI system indicating whether the requirements of Title III, Chapter 2 relating to high-risk AI systems have been fulfilled.

Article 3(20) defines ‘conformity assessment’ as “the process of demonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to a high-risk AI system have been fulfilled”

This covers the following articles:

Article 8: Compliance with the requirements

Article 9: Risk Management System

Article 10: Data and data governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and provision of information to deployers

Article 14: Human oversight

Article 15: Accuracy, robustness and cybersecurity
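
A conformity assessment, in practice, amounts to demonstrating that each of these Articles has been satisfied. A minimal self-assessment skeleton might look like the following sketch; the status values are placeholders, not a method prescribed by the Act.

```python
# Placeholder statuses; 'fulfilled' across the board is what the assessment must demonstrate.
title_iii_ch2 = {
    "Art. 8  Compliance with the requirements": "fulfilled",
    "Art. 9  Risk management system": "fulfilled",
    "Art. 10 Data and data governance": "in progress",
    "Art. 11 Technical documentation": "fulfilled",
    "Art. 12 Record-keeping": "fulfilled",
    "Art. 13 Transparency and provision of information to deployers": "in progress",
    "Art. 14 Human oversight": "fulfilled",
    "Art. 15 Accuracy, robustness and cybersecurity": "fulfilled",
}
conformity_demonstrated = all(v == "fulfilled" for v in title_iii_ch2.values())
print("Conformity demonstrated:", conformity_demonstrated)  # False until all are met
```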

Let us explore this further in the next article.

Naavi


Rameshwaram Cafe Blast: Responsibility of the Telecom Company

It has been reported that in the Rameshwaram Cafe blast, one person who had bought a SIM card/second-hand mobile from a shop was questioned, since his number was involved in the communication related to the blast.

The seller of the mobile has since been cleared, and it has been identified that the SIM card buyer had misused the credentials of some other person to create a fake ID and used it on the second-hand mobile. A similar incident had occurred a few years back, when a property owner in Bangalore was falsely accused in a terror case because a fake Aadhaar card had been issued in his name and used by a terrorist.

When such frauds occur, the dealer who created the fake ID becomes an accomplice and needs to be punished. At the same time, the telecom company which appointed the dealer is also liable for the same offence.

The offence comes under different sections of ITA 2000, such as Sections 66, 66C, 66D and 66F, Section 43, etc. The same offence is recognized under DPDPA 2023 as a failure of compliance for which penalties may be imposed (once the Act becomes fully operative).

In some of these cases, the telecom operator, whether Vodafone, Airtel, Jio or another, offers two kinds of defences: that it had followed "Reasonable Security Practices" under Section 43A, and that it could be considered an "Intermediary" protected from liability under Section 79 of ITA 2000.

In some cases, the companies indicate that they hold an ISO 27001 certificate, which should be treated as "deemed compliance with Section 43A".

In this context, I would like to state my views on why telecom companies need not be complacent that their ISO 27001 certification can protect them from being held liable under Sections 43A, 43 or 85 and other sections of ITA 2000, both for civil penalties and for criminal punishment of their executives.

In the ICICI Bank vs S. Umashankar case (paras 8-15, pages 9-16), the TDSAT held in appeal that if security practices are not followed, Section 43(g) may be applied against the company for "facilitating the contravention through negligence" (in that case it was a bank, but the principle is applicable to a telecom company for negligence in SIM card issue).

Whether having an ISO 27001 certificate is an adequate security practice did not come up for discussion in the Umashankar case. However, in another recent case before the TDSAT, this discussion has come to the fore in the context of SIM card issue at the retail store/agent.

Since the ISO certificate related to a different system and a different date, it had no relation to the SIM card issue process. At the same time, since SIM card "activation" is done only by the authorized official, the retail store agent is only a contractor who verifies the KYC documents and recommends activation. Hence the telecom company cannot claim "intermediary" status. Also, the KYC information is not an "intermediary's data" but is the telecom operator's own data for its own consumption, and hence cannot provide intermediary status to the telecom company under Section 2(1)(w) of ITA 2000.

It is further noted that new rules effective from 1st December 2023 require all customers applying for a new SIM, or a replacement SIM, to go through the KYC process.

“The guidelines also state that all telecom operators are now required to register their franchises, PoS agents, and distributors. Further, they will have to undergo verification. Failure to comply will result in a fine of Rs 10 lakhs. Point-of-Sale (PoS) agents must register themselves through a written agreement with licensees. Existing PoS agents have a 12-month window to align with the new registration process specified by licensees.

This measure aims to eliminate rogue PoS agents who engage in fraudulent practices, such as issuing SIM cards to antisocial or anti-national elements. The government has instructed that any existing PoS agents engaging in illegal activities will face termination and a three-year blacklist.”

It should therefore be one of the compliance requirements of every telecom operator to ensure that the PoS agent displays the registration document indicating that he is an authorized agent for issuing SIM cards.
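
As a sketch of what such a check might look like inside an operator's issue/activation workflow, the snippet below refuses applications from unregistered agents and reserves activation for an authorized official, reflecting the division of roles described above. The agent registry, identifiers and function names are all hypothetical illustrations.

```python
REGISTERED_AGENTS = {"AGT-0042", "AGT-0107"}   # hypothetical written-agreement registry

def accept_application(agent_id: str, kyc_verified: bool) -> dict:
    """A PoS agent may only verify KYC and recommend activation."""
    if agent_id not in REGISTERED_AGENTS:
        raise PermissionError("unregistered PoS agent: application refused")
    if not kyc_verified:
        raise ValueError("KYC documents not verified")
    return {"agent_id": agent_id, "status": "recommended"}

def activate_sim(application: dict, official_id: str) -> dict:
    """Activation is reserved for the operator's authorized official."""
    if not official_id:
        raise PermissionError("activation requires an authorized official")
    application.update(status="activated", activated_by=official_id)
    return application

app = accept_application("AGT-0042", kyc_verified=True)
print(activate_sim(app, official_id="OFF-9001"))
```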

Further, mobile customers can check from time to time the number of SIM cards linked to them by verifying their number at https://tafcop.sancharsaathi.gov.in/telecomUser/

Currently, up to 9 SIM cards can be issued to a single person, and bulk SIM cards for companies are issued through an authorized signatory registered by the organization with the DoT.

P.S.: It is possible that most telecom companies have not yet introduced security measures as envisaged in the December 1, 2023 guideline, and compliance auditors need to specially check a sample of the retail stores to ensure that proper systems are in place at the SIM card issuing outlets.

Naavi
