The Shape of Things to Come-14: Automated Means of Processing and Automated Decision Making

P.S.: This series of articles is an attempt to place some issues before the Government of India, which promises to bring a new Data Protection Law that is futuristic, comprehensive and perfect.

In our continued discussion on “The Shape of Things to Come”, we have so far discussed the following.

1. Introduction
2. Preamble
3. Regulators
4. Chapterization
5. Privacy Definition
6. Clarifications-Binary
7. Clarifications-Privacy
8. Definitions-Data
9. Definitions-Roles
10. Exemptions-Privacy
11. Advertising
12. Dropping of Central Regulatory authority
13. Regulation of Monetization of Data

We now proceed further….


Automated Processing and Automated Decision Making are two concepts which need some clarity in the law.

In the PDPB 2019, the term “automated means” was defined as under.

Section 3 (6) “automated means” means any equipment capable of operating automatically in response to instructions given or otherwise for the purpose of processing data;

One of the operational sections referring to “Data which is processed through automated means” is Section 19, which deals with Data Portability.

This section was as under.

“Section 19: Right to Data Portability

(1) Where the processing has been carried out through automated means, the data principal shall have the right to—

(a) receive the following personal data in a structured, commonly used and machine-readable format—…..”

As against this use of the term “Automated Means” in India, which applies to all forms of processing by the use of computer devices, Article 22 of GDPR refers to “Automated individual decision-making, including profiling” and states as under.

1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

We can observe that GDPR refers to “Automated Decision Making” while PDPB 2019 referred to “Automated Means of Processing”. These two are different. The Indian definition covers all forms of processing using a computer device, while the GDPR provision is restricted to situations where the processing leads to a decision which may have some consequence for the data subject, such as providing or rejecting a service or changing the profile of a person to reflect an adverse view.
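To make the distinction concrete, the following is a minimal sketch in Python (a purely hypothetical illustration; the loan example, field names and threshold are assumptions and are not drawn from either statute). The first function merely processes data through automated means; the second produces a decision solely by automated processing, with no human involvement, and that decision has a direct consequence for the individual.

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    monthly_income: float
    existing_emi: float

def compute_debt_ratio(app: LoanApplication) -> float:
    """Processing through 'automated means': the computer only
    transforms the data; no consequence follows from this step alone."""
    return app.existing_emi / app.monthly_income

def auto_decide(app: LoanApplication, threshold: float = 0.4) -> bool:
    """'Automated decision making': the outcome (grant or refuse the loan)
    is produced solely by the program, with no human in the loop."""
    return compute_debt_ratio(app) < threshold

app = LoanApplication("A-101", monthly_income=60000.0, existing_emi=30000.0)
print(compute_debt_ratio(app))  # 0.5 -- automated processing only
print(auto_decide(app))         # False -- a solely automated decision
```

On the GDPR formulation, only the second function would attract Article 22; under the PDPB 2019 definition of “automated means”, both would be covered.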

It is necessary to clarify both terms distinctly.

This is important even for the discussion on whether “personal data disclosed to a computing device but not to a human” should be considered as “Disclosure” or not, which we discussed in our earlier article on “Definition of Privacy”, where we added an Explanation as follows:

“Sharing” in the context above means “making the information available to another human being in such form that it can be experienced by the receiver through any of the senses of seeing, hearing, touching, smelling or tasting of a human in such a manner that the identity of the individual to whom the data belongs may become recognizable to the receiver with ordinary efforts”.

In the above definition, we specified that only when personally identifiable information is viewable by a human being would it be considered a “Disclosure”. If the information is processed by an automated system which provides an output that does not contain personally identifiable information, the processing is “Anonymized Processing”. Such processing would be a combination of two processes, one of which is “Anonymization”, but both occur within the combined process so that no human views the output in an identifiable form.

The essence of the definition was that such processing did not require explicit consent and could be undertaken by the processor as part of his legitimate interest.
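To illustrate, here is a minimal sketch (purely illustrative; the record fields and the aggregate computed are assumptions) of such a combined process: identifiable records are disclosed only to the process, the anonymisation happens inside it, and the only output a human ever sees carries no personally identifiable information.

```python
from statistics import mean

# Identifiable records disclosed to the process (never shown to a human)
records = [
    {"name": "Asha",  "pincode": "560001", "salary": 52000},
    {"name": "Ravi",  "pincode": "560001", "salary": 61000},
    {"name": "Meena", "pincode": "560002", "salary": 47000},
]

def anonymised_average_salary(rows):
    """The combined process: identifiable fields are read internally only;
    the output contains no personally identifiable information, so no
    human views the data in an identifiable form."""
    return round(mean(row["salary"] for row in rows))

# Only this anonymised output leaves the process
print({"average_salary": anonymised_average_salary(records)})
```

On this reasoning, the identifiable data is experienced only by the process and never by a human receiver, which is why such processing would not amount to a “Disclosure” under the definition quoted above.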

There is a parallel instance in the general legal environment as well, which we refer to as “Privileged Information”. Certain information disclosed to a lawyer or a doctor is considered “Privileged Information” and is not disclosable to others, under a special confidentiality obligation recognized in professional law and ethics.

Similarly, information disclosed to a “Process” may be considered “Privileged Communication” and should not require specific consent even when it contains identifiable information. However, the “Process” is not empowered to disclose the identified information after processing. In the human scenario, compliance is left to the integrity of the individual, while in the case of a process, compliance is a function of the integrity of the software, which can be audited at code level and certified, or for which a suitable assurance can be provided.

The concept of “Privileged Communication” can be extended to parts of “Legitimate Interest Disclosure” such as when identifiable personal information is disclosed to law enforcement personnel.

With this in view, the following definitions may be added in the definition clause.

Automated Means:

“Automated means” means any equipment capable of operating automatically in response to instructions given or otherwise for the purpose of processing data;

Automated Decision Making:

“Automated Decision Making” means a process through which a decision is arrived at without any human involvement as a part of the process.

Privileged Communication

“Privileged Communication” means disclosure of identifiable personal information to another human or a device, with enforceable restrictions on further disclosure of the information in a processed form to another human being.

Explanation:

Disclosure of identifiable personal information to a technical process which processes the information and creates an output in anonymised form is a privileged communication to the device.

Disclosure of identifiable personal information, or de-identified or pseudonymised information, to another human being such as a law enforcement officer, with an enforceable restriction on further disclosure in an identifiable manner, is also a privileged communication.


P.S.: These discussions are presently for debate and are a work in progress awaiting more inputs for further refinement. It is understood that the Government may already have a draft and may completely ignore all these recommendations. However, it is considered that these suggestions will assist in the development of “Jurisprudence” in the field of Data Governance in India, and hence these discussions will continue until the Government releases its own version for further debate. Other professionals who are interested in this exercise, and particularly research and academic organizations, are invited to participate. Since this exercise is too complex to institutionalize, it is being presented at this stage as only the thoughts of Naavi. Views expressed here may be considered as the personal views of Naavi and not those of FDPPI or any other organization that Naavi may be associated with.

1. Introduction
2. Preamble
3. Regulators
4. Chapterization
5. Privacy Definition
6. Clarifications-Binary
7. Clarifications-Privacy
8. Definitions-Data
9. Definitions-Roles
10. Exemptions-Privacy
11. Advertising
12. Dropping of Central Regulatory authority
13. Regulation of Monetization of Data
14. Automated means

 
