Today is Digital Society Day of India

Today is 17th October, a day of special significance to all those who use the internet, computers or mobiles in India.

On this day in 2000, the Information Technology Act, 2000 was notified. It gave legal recognition to electronic documents, to a method of signing an electronic document, and to a method of presenting an electronic document as evidence in a Court.

All this together ushered in the Digital Society of India.

I will be celebrating the day at NALSAR Hyderabad with the students.

I hope that some day MeitY will realize the importance of the day and officially declare it the Digital Society Day of India.


Posted in Cyber Law | Leave a comment

Anonymization and Avatars of Data

“Anonymization” takes personal data out of the purview of most data protection regulations. Hence one of the objectives of data protection compliance managers is to mitigate data protection risks by pushing part of the “Protected Data” out of the “Protection Zone” by “Anonymizing” it.

In the Indian PDPA, the Data Protection Authority is eventually expected to explain when “Personal Data” is deemed to be “Anonymized”.

For an organization, the “Data” to be governed includes “Personal Data” as well as “Anonymized Data”. Just because a certain data element is anonymized does not mean that it is no longer an asset that needs to be secured. In fact, many organizations may acquire “Identified Personal Data” at a cost and thereafter spend more to anonymize it. So, anonymized data may be more valuable as an asset than the identified data from the “Cost of Acquisition” point of view.

However, the need to secure “personal data” because of the regulations, and the possibility of a heavy financial penalty in case of failure, introduce another element of “Opportunity Cost” into identified personal data, arising out of data breach and/or non-compliance with data security regulations.

A Corporate Manager who is interested in “Data Governance” (Data Governance Officer or DGO) is concerned both with the “Cost of Acquisition” as well as the “Cost of non compliance”. The “Data Protection Officer” (DPO) on the other hand is interested only in the “Non Compliance Cost”.

“Anonymization” is a process that acts as a gateway between the DGO’s territory and the DPO’s territory. The DGO hands over the data as “Identified Personal Data” to the DPO for compliance management. At the same time, he would have retained what is classified as “Anonymized Data”. The anonymized data may go to a separate shop floor for a process of adding value through “Data Analytics”.

If however, the “Anonymization Process” is not good enough, then the organization would be exposed to the re-identification risk. The demand for penalty in that case would come from the supervisory authority to the DPO.

DPO is therefore responsible for the “Adequacy” of the “Anonymization Process”. In fact if a company adopts “Anonymization” as a part of its Data Management policies then the “Anonymization Process” should be subjected to a DPIA (Data Protection Impact Assessment) by the DPO.

These are probably situations where there would be a conflict between the DGO and the DPO. While the DPO may blame the DGO for imperfect anonymization, the DGO may blame the DPO for “Motivated Re-identification” in a downstream process.

Let us leave this conflict to be resolved by the proper structuring of the “Data Governance Framework”, which should include the “Data Protection Framework” as a subset.

In the meantime, let us briefly look back on Naavi’s Theory of Data and see whether this theory can recognize the journey of data from “Personal Data” status to “Anonymized Data” status.

In the Theory of Data, we had included a “Reversible Life Cycle Hypothesis”. This was one of the three hypotheses that made up the theory, the other two being the “Definition Hypothesis” and the “Additive Value Hypothesis of Ownership”.

The essence of the theory is that “Data is constructed by technology and interpreted by humans”, and that data undergoes a life cycle from birth through adulthood and different stages of maturity to death, conferring ownership on different persons for different value additions.

If we try to trace the life cycle of personal data through anonymization we can identify that data goes through different phases of development in which it will assume different avatars as shown in the diagram above.

A company may normally acquire data in the form of limited personal data collected through a web form, or when a netizen clicks on a web advertisement or visits a website. At this point the company may get some limited identity parameters such as the IP address of the person and possibly the name and e-mail address he fills in on a web form. This limited personal data may later acquire the status of “Irrevocably Identifiable Personal Data” if some element of identification such as a PAN number or a mobile number is collected, or become sensitive personal data if the collected data includes specific data elements. If processed into a profile, it may become profile data.

If the company removes the identity parameters and keeps them separately, the data may become “Pseudonymized Data”. If the identity parameters are irrevocably destroyed, the data may become “Anonymized Data”. The anonymized data may then be aggregated into big data.
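The distinction can be illustrated with a short Python sketch. The field names and record layout here are purely hypothetical: pseudonymization keeps a separately stored key table that can reverse the mapping, while anonymization keeps none.

```python
import secrets

# Hypothetical identity parameters, for illustration only.
IDENTITY_FIELDS = {"name", "email", "pan", "mobile"}

def pseudonymize(record: dict) -> tuple[dict, dict]:
    """Move identity parameters into a separate key table (reversible)."""
    token = secrets.token_hex(8)
    identity = {k: v for k, v in record.items() if k in IDENTITY_FIELDS}
    rest = {k: v for k, v in record.items() if k not in IDENTITY_FIELDS}
    rest["pseudonym"] = token
    return rest, {token: identity}   # the key table is stored separately

def anonymize(record: dict) -> dict:
    """Drop identity parameters irrevocably -- no key table is kept."""
    return {k: v for k, v in record.items() if k not in IDENTITY_FIELDS}

record = {"name": "Asha", "email": "asha@example.com", "city": "Pune"}
pseudo, key_table = pseudonymize(record)
anon = anonymize(record)
assert "name" not in pseudo and "name" not in anon
assert list(key_table.values())[0]["name"] == "Asha"   # still reversible
```

Whether such separation alone meets a legal “anonymization” standard is, of course, exactly the question this post discusses.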

In between all these categories, part of the limited identity personal data or identified personal data or anonymized data may be called “Community data” if it contains the data of a group of individuals.

In all the above avatars, the “Corporate Data” is a class of its own and may be further classified as IP data, Business Intelligence data, HR data, Finance data etc.

While the “Data Protection laws” may apply to Personal data, Sensitive personal data and profile data, Cyber Crime laws such as ITA 2000 will apply to all data including personal data. In future, a Data Governance Act of India may also come to apply to “Non Personal Data”, “Aggregated Data”, “Community Data” etc.

The fact that “Data” exists in multiple forms and one form can change into another and back is a point well captured by the “Reversible Life Cycle Hypothesis” of the Theory of Data. The fact that different laws may apply to it at different stages is also explained by the life cycle hypothesis. The only difference between the human life cycle and the data life cycle is that the data life cycle can be reversed, in the sense that non-personal data can become personal data and later return to non-personal data status. Humans may not be able to do so, except mythological characters like Yayati and Puru.

What the Theory of Data highlights is that any regulation which does not take into consideration that “Data” changes its nature in the ordinary course of its usage, and that “Dynamic Data” requires a “Dynamic Regulation”, will have problems.

In the human equivalent, we have the issue of the law applicable to juveniles being different from the law applicable to adults. Similarly, the law applicable to the unmarried may be different from the law applicable to the married, the law applicable to men can be different from the law applicable to women, the law applicable to Hindus may be different from the law applicable to Muslims, and so on.

Just as there is strength in the argument that there should be a “Uniform” law for humans, there should also be an attempt to explore if “One comprehensive law of data” can cover both Personal Data and Non Personal Data.

In view of the important transition of applicable regulations when data crosses the border of anonymization, the management of the anonymization gateway is a critical function of Data Governance.

One debate that has already come up is whether there can be a “Standard of Anonymization”.

If so, how will it be different from a de-identification standard, which defines certain parameters as “Identity Parameters”: if they are not present in a data set, the data set is considered de-identified; otherwise it is identified.

The “Anonymization Standard” cannot be that simple, since it must be computationally infeasible to re-identify an anonymized data set.

“Computational Infeasibility” of re-identification comes from the erasure of the “Meta Data”, which needs to be irrevocably removed. We all know that if we create a Word document, the details of the author are perhaps known to Microsoft. If therefore the document is to be anonymized, we need to check that whatever metadata is associated with the document, wherever it is stored, is permanently destroyed.

“MetaData identifier Destruction” could perhaps be the difference between the “De-identification/Pseudonymization” and “Anonymization” .

In forensic destruction of data, early DoD standards required data to be erased several times before a data-holding device could be said to be sanitized. This implies that even when data is forensically erased, a certain number of repetitions are required to ensure that the process cannot be reversed by an intelligent de-sanitization algorithm.

The essence of this “Anonymization” through forensic overwriting of data bits is to randomize the overwriting process so that it cannot be reversed.

The standard of anonymization that can be recommended to the DPA is therefore not necessarily overwriting all the bits to be sanitized with a zero bit several times.

It can be different, aimed at randomizing the binary bit distribution in the data-holding device. An example of such a process could be:

a) Overwrite all the bit sets that represent the identification parameters with zeros, but in a random sequence. (This presupposes that the data set can be divided into identity parameters and other data associated with them.)

b) Repeat by overwriting all the bit sets once again, this time with ones, again in a random sequence.

c) Repeat by spraying zeros and ones randomly on all the data bits in the zone.

This process may leave a random distribution of zeros and ones in the selected zone which cannot be reversed. As long as the rest of the data does not contain any identity parameters, the data can be considered as “Anonymized”.
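The three steps above can be sketched in Python as follows. This is a minimal illustration, assuming the identity parameters occupy known byte ranges in a buffer, and it operates on whole bytes rather than individual bits:

```python
import secrets

def wipe_identity_bits(buf: bytearray, spans: list[tuple[int, int]]) -> None:
    """Three-pass wipe as described above: zeros, then ones, then random
    bytes, each fixed-fill pass visiting the identity-parameter bytes in
    a random sequence."""
    offsets = [i for start, end in spans for i in range(start, end)]
    for fill in (0x00, 0xFF):                  # passes (a) and (b)
        pending = list(offsets)
        while pending:                         # overwrite in random order
            i = pending.pop(secrets.randbelow(len(pending)))
            buf[i] = fill
    for i in offsets:                          # pass (c): spray random bits
        buf[i] = secrets.randbelow(256)

record = bytearray(b"name=Alice;city=Paris")
wipe_identity_bits(record, [(5, 10)])          # destroy the bytes of "Alice"
assert record[:5] == b"name=" and record[10:] == b";city=Paris"
```

The function and byte layout are hypothetical; a real sanitization routine would also have to reach slack space, backups and any copies of the metadata, which no in-memory sketch can capture.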

May be technology experts can throw more light on this.



DOD standard for data erasure


The Roadmap of PDPA

The Personal Data Protection Act of India (PDPAI), by whatever name it will finally be called, is expected to be tabled in the winter session of Parliament. (See Report here). Though the Government is under an obligation to the Supreme Court in the Aadhaar case to pass the law at the earliest, this session is also likely to be kept occupied with the proposed Uniform Civil Code Bill. Hence it is not clear if substantial progress can be made on the passage of the bill during the session.

The industry lobby is however interested in the deferment of the bill until its demand for dilution of the “Data Localization” requirement is conceded. One of the tricks which may be used is to push the bill into a Standing Committee, which could delay its passage indefinitely.

Though the bill may require some final touches after it is presented, we must appreciate that the bill was drafted under the direction of Justice Srikrishna and would have been further refined after receipt of public opinion.  During the discussion in the Parliament itself more refinements will come up for discussion. Hence the need for sending it into a standing committee is low. But the vested industry interests would do their best to ensure that the passage of the bill is delayed by insisting on the bill being sent to the standing committee.

Once the bill is passed by both the houses and gets the assent of the President, the Act will become effective.

The Government may not exercise the discretion to make the “Notified Date” different from the notification date of the Act as provided for under Section 97, though a window of 12 months has been provided for the notification of the “Notified Date”.

On the Notified Date, the power to make rules and establish the DPA will vest with the Government. Within the next 3 months the DPA needs to be appointed. This will be a body of 6 persons with a designated chairperson.

Once the DPA is formed and the infrastructure such as the office place and secretariat is provided, the responsibility for further action shifts to the DPA.

The first phase in the road map will therefore be the establishment of the DPA and nothing more.

Subsequently, the DPA will have to draft several regulations as “Rules” and notify the same through a Gazette notification.

Within 12 months of the “Notified Date”, the DPA will bring out the first set of regulations, which will consist of the “Grounds of Processing of Personal Data”. At this time the DPA has to define what is “Personal Data” and what is “Anonymised Data”, besides clarifying the applicability of the Act to processing carried out outside India by Indian and non-Indian entities.

“Anonymisation” has been defined under the Act as under

Anonymisation in relation to personal data, means the irreversible process of transforming or converting personal data to a form in which a data principal cannot be identified, meeting the standards specified by the Authority.

Personal data has been defined under the Act as under

“Personal data” means data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, or any combination of such features, or any combination of such features with any other information;

Both these sections need to be elaborated in the rules, indicating what is not personal data and what does not constitute “Anonymisation”.

Additionally, the “Codes of Practice”, which will cover the substantive aspects of the regulation, are also to be notified within 12 months of the date of notification.

The Government may choose different notification dates for notifying the “Grounds of Processing” as per Section 97(5) and the “Codes of Practice” as per Section 97(6).

The rules regarding cross-border restrictions on transfer would be notified on a separate date as per Section 97(7).

The residual regulations would be notified within 18 months of the Notified Date as per Section 97(8), and this date may be different from the date under Section 97(7).

The entire road map as per Chapter XIV  is captured here

In the industry there are already some efforts to provide inputs to the Government on how the regulatory process needs to be streamlined. The effort of select private entities to be part of the regulatory process is to be appreciated, though excessive concern is not warranted. For the Government, legislation is a day-to-day affair and the officials are well equipped to go through the process systematically.

We may however continue to provide inputs on some of the more technical and legal features of the regulations.





Is Data Governance a subset of Data Protection or is it the Vice Versa?

So far, discussions on Data Governance were restricted to the Big Data players. Data Security professionals were more focussed on “Data Security” and everything else took a secondary place.

In organizations pursuing GDPR compliance, DPOs became key senior executives reporting directly to the CEO and called all the shots in inter-departmental conflicts such as whether a new client or process should be onboarded.

Slowly, Data Governance is regaining its voice, and discussions are now on about how Data Governance and Data Protection should co-exist.

The Data Governance approach is basically to look at “Data” as an “Asset” of an organization. In management parlance, any asset may be bought as raw material, converted into a finished product and re-sold. The “Value Addition” which maximizes the finished product’s price realization and reduces the cost of inputs is the responsibility of managers. The “Productivity” of every production parameter, as well as of the raw material itself during processing, is the key focus of the data managers.

Since every asset of the Company has to be protected from loss or pilferage, it was necessary to consider “Security of Data” as one of the parameters that the Data Governance Manager was expected to consider as part of his responsibility.

Even if the “Security” required was of the highest order, the productivity of the “Asset” was still the key, and “Security without Productivity” was not the preferred objective.

However, when Data Security professionals came to rule corporate decision making, there was a new-found empowerment of data protection professionals, some of whom might have overplayed their part because GDPR imposed blinding fines.

The discussion was therefore whether “Security at all costs” even with lesser productivity of the asset was the way to approach the Data Security and Data Governance functions.

The CEO therefore had a new problem of balancing the two functions while ensuring the business interests of the organization.

Though as consultants we did emphasize that there was a “Legitimate Interest” of an organization that could be considered while adopting the stringent data protection regulations under GDPR, some consultants coming entirely from a legal background were so paranoid about the regulatory aspect that the “Legitimate Interest” ground was very much diluted. Some of the supervisory authorities, perhaps including the ICO of the UK, also supported some impractical views of how to interpret the Data Protection Principles.

Some ill-informed activists even sent disturbing e-mail notices to Indian companies when legitimate business contacts were made, raising the debate of how far a “Business E-Mail” can be considered “Personal Data”.

The ensuing debate on Data Governance has to settle, once and for all, whether Data Security is a subset of Data Governance, whether it is the other way round, or whether both are to be considered and managed with equal weightage.

One question that will be asked is whether personal data can be sold under GDPR if there is a consent?

We will be discussing more of this in the coming days….

Do not forget to attend today’s FDPPI webinar on Data Governance at 5.30 PM on Zoom. Contact for invitation.




Additive Value hypothesis of ownership of data

Out of the three hypotheses which we took up for discussion in constituting Naavi’s Theory of Data, we have so far discussed the “Definition Hypothesis” and the “Reversible Life Cycle Hypothesis”. We shall now take up the third hypothesis, through which we shall discuss how we can interpret the “Ownership of Data”.

In all the regulatory discussions on “Data Protection”, there is a concept of the “Data Subject” (or the “Data Principal”) providing “Consent” to another person as an expression of the data subject’s choice of how his personal data can be used by the recipient. Certain laws specify some basic data protection principles to be followed, recognize certain basic rights of the data subject, and impose certain obligations on the data recipient in terms of security, disclosure etc.

The “Consent” is almost always recognized as a “Contract”.

Certain regulations are clear in defining that “Personal Data” is a “Property” of the data subject and the “Consent Contract” is transferring some part of or all of the property to the recipient.

It is interesting to note that even in regulations which consider that personal data is a personal property which can be “Sold” for a consideration, there is no mention of whether the sale is “Exclusive” or “Non-Exclusive”. We know that unlike physical property, “Data Property” can be transferred to another person even while a copy of the same remains with the transferor. In fact, if there is a legal challenge, the copy with the transferor will be considered the “Original” and the copy with the recipient a “Secondary Copy”.

Though not specifically mentioned, some laws imply that “Personal Data” or “Elements of Personal Data” are “Transferable” as a “Right to Use”. So what happens in the transfer of personal data information from the data subject to the data recipient is that there is a disclosure with a transfer of the right to use, process, share or otherwise dispose of the transferred elements of data, either together or independently.

But even the best Consent form drafted by the best GDPR lawyer in the world has never properly indicated that the

“Personal Data now being disclosed consists of several data elements and this consent is deemed as an offer/acceptance to transfer the rights of individual data elements contained here in like the name, email address, etc., to the limited extent of it being used for the purpose for which this consent is being provided as understood by me, namely ………. and that the right is collective/applicable to each data element individually”

If the transfer of right is for “Collective” data elements, then use of the data for aggregation even after de-identification or anonymization becomes a violation of the terms of the contract.
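One way to make per-element consent concrete is to record, for each data element, the rights transferred and whether aggregation is permitted. The structure below is a purely hypothetical sketch, not drawn from any regulation or real consent form:

```python
# Hypothetical per-element consent record: each data element carries its
# own transferred rights and an aggregation flag, so "collective" vs
# "per-element" scope is explicit rather than implied.
consent = {
    "purpose": "newsletter delivery",   # assumed purpose, for illustration
    "scope": "per-element",             # as opposed to "collective"
    "elements": {
        "name":  {"rights": ["use"], "aggregatable": False},
        "email": {"rights": ["use", "share"], "aggregatable": False},
    },
}

def may_aggregate(consent: dict, element: str) -> bool:
    """Aggregation (even after de-identification) is a violation unless
    the consent for that element explicitly allows it."""
    return consent["elements"].get(element, {}).get("aggregatable", False)

assert may_aggregate(consent, "email") is False
```

Under such a record, using the e-mail address for aggregation would be flagged as outside the consent, which is exactly the kind of ambiguity the clause above tries to remove.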

Thus the ownership of data, as presently understood as a “Property”, has a problem: it cannot identify whether the consent extends to the whole set given at a particular point of time, or whether it should be considered as multiple consents, one for each data element.

Then there is the question of the legal instrument by which such a transfer can be effected, which is also difficult to identify since “Data” as a property cannot be classified as movable or immovable property, as an actionable right, or as an intellectual property right such as Copyright, Trademark or Patent, for which separate laws and definitions exist.

Even if the instrument of transfer is a “Contract”, there is a need to define what is the “Property” being transferred. If there is ambiguity on the same, then the Contract may fail due to lack of “Meeting of the Minds” between the contracting parties.

In India we also have the issue that “Click Wrap” contracts only have the status of an implied contract, and the onerous clauses that may be included therein may become voidable, as in a standard form contract/dotted line contract.

The Theory of Data should therefore provide such explanations as necessary to ensure that this transfer is properly explained.

The Processing Value

One of the other areas where the existing explanations on the ownership of personal data fail miserably is when the processor takes in the personal data as provided by the data subject and through his own contributions, creates a new product out of it such as a “Profile” or “Community Data” or “Aggregated Anonymized Data” etc.,

Under current regulations like GDPR, it is interpreted that all these value-added versions that emanate from the original set of personal data belong to the personal data owner, and when he withdraws his consent or wants to exercise his right to be forgotten, all the derivatives have to be destroyed (except perhaps the anonymized data elements). If the data subject wants the data to be ported back to him or to another processor, then the entire derivative set, including the profile created by one processor, has to be transferred, perhaps to a business competitor of the first processor.

This interpretation will seriously conflict with the law of intellectual property rights where it is already an established view that a value added data base creation is an intellectual property of the creator and is different from the value of the raw data.

We have also in the past given the following two examples that question the theory that all superstructures built on personal data become the property of the data subject, as if the data were land on lease on which buildings are constructed, to be returned to the landowner on expiry of the lease.

Example 1:

A person gives a lemon to the processor. The processor crushes it, adds water and sugar, and creates lemon juice. Then the lemon’s owner withdraws his consent to the use of the lemon and wants the lemon or the juice back. Obviously the processor cannot give back the lemon. But will he be required to return the lemon juice, which is more valuable than the original lemon since additional cost inputs have gone into it?

Example 2:

A person gives a piece of Coal to the processor. The Processor uses a compression technology to compress the coal and convert it into its allotropic form of a “Diamond”.

(P.S: Allotropy is the property by virtue of which an element exists in more than one form, where each form has different physical properties but identical chemical properties. These different forms are called allotropes. The two common allotropic forms of carbon are diamond and graphite.)

Now if the owner of “Coal” wants the “Diamond” back, how fair is the demand?

In the Indian draft legislation of “DISHA”, an attempt has been made to define that the medical diagnostic reports of a patient developed by a diagnostic center or the hospital is the property of the patient. Unless the law clarifies that the medical report has a value and the patient is entitled to get a copy of the report only if he pays the value in terms of the fees charged for the diagnosis, there may be a legal conflict when the patient demands that the information should be returned whether or not he has paid the fees.

We can therefore conclude that there is a shortcoming in the present theories of data ownership either as a “Property” or as a “Right”. We need a better explanation of the “Data Property Ownership”.

Under this new Theory of Data being propounded, I therefore propose a hypothesis as follows:

“The ownership of Data is applicable to individual data elements and belongs to the person/entity who creates the said element of data that enhances the value of the associated  set of data elements”.

What this means is that as the “Data” is born and then grows as explained in the life cycle hypothesis, the value of the data set undergoes a change. Different persons are responsible for the change. The data subject is of course one party who may be involved in most of these value changes but there are others who contribute to the value addition.  The ownership of the data has to be recognized with a segregation of the data in current form into different value units and ownership has to be recognized to the persons who are responsible for the value addition.

For example, let us say Company C floats a service and Mr P opts to become a member. P provides some set of personal data of which he is the owner. Company C creates a “Profile”. The “Profile” data, if attached to the original raw data provided by the data subject, is more valuable and marketable. Company C can realize say Rs 100 for this profile data, whereas Mr P had zero value for his name, address etc., which he shared in the first place with Company C. Let us say that Company C has been fair to Mr P, paid Rs 10 for the collection of the raw personal data, and also agreed that 10% of any further value realization attributable to the personal data would be paid to him like a “Royalty”; then there is a fair distribution of the “Value Addition”.

In such a concept, P will be an owner to the extent of 10% of the value in the hands of C, and C will be the owner to the extent of Rs 90.

This “Additive Ownership” concept co-exists with “Additive Value Realization” of the data as it matures in its life cycle. The value realization can be “Notional” in the sense that C may not sell the data to a third party but transfer it to another division of its own, in which case the “opportunity benefit” has to be recognized as a “Transfer Price”.

If there is therefore a collection of personal data for processing, the processor may, after defining the purpose, also indicate the notional value of the processed personal data and the price he is willing to recognize as payable to P, either immediately or after a certain event or time.

This “Additive Value Ownership” will have a cascading effect, and there could be multiple owners of the data, one for each recognizable value addition. The composite data set at any point of time will therefore be considered as made up of multiple subsets, each attributable to one processor, with the ownership of that value addition remaining with the entity that contributed it.
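A minimal sketch of this accounting, using the hypothetical figures from the example above (Rs 10 paid for the raw data, Rs 90 of profiling value added by the company):

```python
def ownership_shares(additions: list[tuple[str, float]]) -> dict[str, float]:
    """Map each contributor to its share of the composite data set's value.
    'additions' lists (contributor, value_added) in life-cycle order."""
    total = sum(value for _, value in additions)
    return {who: value / total for who, value in additions}

ledger = [("P", 10.0),    # raw personal data from the data subject
          ("C", 90.0)]    # value added by the company's profiling
assert ownership_shares(ledger) == {"P": 0.1, "C": 0.9}
```

Each further value addition would simply append another entry to the ledger, cascading the ownership split as the post describes.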

This also means that each such part-owner of the value has a right to transfer his property to another person unless his contract of value creation prohibits the same. Law should however treat clauses that restrict such transfers as “Unfair Contract” terms, and thereby enable the free flow of value addition for any given data set so that society at large benefits.

This discussion will continue…



Reversible Life Cycle hypothesis of the Theory of Data

This is in continuation of our discussion on the Theory of Data, which attempts to explain “What Data is” for different communities such as technologists, lawyers and business managers. In this direction we have stated that there are three hypotheses that we want to explore, and the first of these was the thought that

“Data is a congregation of fundamental data particles which come together to form a stable, meaningful pattern for software/hardware, enabling a human being to make meaning out of the pattern.”

If we take the expression ‘10’ and ask several people to read it as ‘data’, then perhaps most of them will read it as the number ten. But ask a “Binary Reader” who knows the language of binary the way you and I know English, and he will say ‘10’ is the decimal number two.

[This is not much different from asking people to read “Ceci est une pomme”. Not many would be able to understand this. But those who know French may understand this equivalent to “This is an apple”. ]

Can ’10’ be ‘Ten’ and ‘Two’ at the same time for different people? The answer is a definite yes since the human understandable meaning of  data ’10’ depends on the cognizable power of the human. “What Data is”,  therefore cannot be expressed in “absolute” terms. But it is relative to the language which a human uses to “Experience” the data. We use the word “Experience” here since data can be read as a text or seen as an image or heard as a sound depending on the capability of the converter of the binary notation to a human experience of reading, seeing or hearing.

If we go a step further and deeper, the binary data ‘10’ does not exist on a hard disk as an etching of ‘1’ and ‘0’. It represents a “State” of a region of the hard disk: whether it carries a current or not, whether it is magnetized in one direction or the other, whether a light is in the on or off state, etc.

The fundamental data particles which we often call as binary bits do not have a specific form. If the data interpreter is capable of seeing the ‘lights on and off” and convert it into a human understandable text, it is fine. If the interpreter can sense the magnetic state, then also it is fine. If the data is defined as the “Spin state” of an electron or a nucleus as in Quantum Computing and the data interpreter can identify the spin states, then that type of data representation is also acceptable.

But in all these cases, “Data” is not “Data” unless there is a pattern to the data particles coming together and staying together until they are ‘observed by the interpreter’. If the data is unstable and in a chaotic condition, the data particles may be there but they do not represent any meaningful data.

The fundamental data particles existing in a chaotic state and existing in a stable pattern are two states which are like a human foetus before life enters and after life enters. This is the concept of “Data Birth”.

Once a “Data Set” which is a congregation of a stable pattern of fundamental data particles is formed, it can grow bigger and bigger by adding more data bits or more units of data sets. This is the horizontal and vertical aggregation of fundamental data particles.

Horizontally, when ’10’ becomes ‘10111000’, it becomes the number one hundred and eighty-four.

Similarly, when a stream of binary such as ‘01000001 01001110 01000100’ is read through a binary-to-ASCII converter, it reads as ‘AND’. The same pattern reads as 4279876 in a binary-to-decimal converter.

Thus ‘1’ can grow into ’10’ and further to ‘10111000’ etc in a horizontal direction.
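The conversions described above can be checked with a short Python sketch, feeding the same bit patterns to different “interpreters”:

```python
# Horizontal growth: '10111000' read as a binary number
print(int("10111000", 2))  # 184

# The same bit stream through a binary-to-ASCII converter:
# each 8-bit group becomes one character (65='A', 78='N', 68='D')
bits = "01000001 01001110 01000100"
chars = "".join(chr(int(group, 2)) for group in bits.split())
print(chars)  # AND

# ...and through a binary-to-decimal converter, as one 24-bit number
print(int(bits.replace(" ", ""), 2))  # 4279876
```

One and the same stable pattern of bits yields ‘AND’ to one interpreter and 4279876 to another.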

When the text ‘vijay’ is combined with another data element, say an e-mail address, we have a composite data set which a human may recognize as a name and an e-mail address. This composite data set is considered “Personal Information”.

Thus, an alphabet grows into a name horizontally and combines with an e-mail address vertically to become “Personal information”.

Thus “Personal information” is a grown-up state of the data, which started with a single data cell of 1 or 0 and added other cells, just as a human cell grows into a foetus, acquires life on the way, gets delivered as a baby, and grows into a child, an adult and so on.

A similar “Life Cycle” can be identified in the manner in which “Data” gets born within a control environment (say within the corporate data environment) and then changes its nature from a foetus without life to a foetus with life, a delivered baby, a child, an adult etc.

Somewhere during the journey, the personal data may become sensitive personal data, or lose some of its characteristics and become anonymized data, or wear a mask and become pseudonymized data, and finally may get so dismembered that the data set disintegrates from a “Composite data set” into “Individual data sets” and further into “fundamental data particles”, losing the “stable pattern” which gave it a “Meaning”. This is like the ‘death’ of a human being.
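These changes of state can be sketched as simple transformations on a record. The record below is entirely hypothetical and invented for illustration: hashing the identifier is one common form of pseudonymization (the “mask”), while dropping the identifier altogether approximates anonymization.

```python
import hashlib

# Hypothetical composite data set ("personal information")
record = {"name": "vijay", "city": "Hyderabad", "purchase": "book"}

# Pseudonymization: the identifier wears a "mask" (a hash).
# The mask can still link records of the same person together.
pseudonymized = dict(record, name=hashlib.sha256(record["name"].encode()).hexdigest())

# Anonymization: the identifying element is removed entirely;
# the remaining data has lost the pattern that made it "personal".
anonymized = {k: v for k, v in record.items() if k != "name"}

print(pseudonymized["name"][:8])  # a stable mask, not the name itself
print(anonymized)                 # {'city': 'Hyderabad', 'purchase': 'book'}
```

Note that pseudonymized data remains linkable (the same input always produces the same mask), which is one reason regulations treat it differently from anonymized data.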

Thus the “life cycle” of data is comparable to the life cycle of a living being.

Just as there is a law for an individual when he is a minor, different from the law for an adult, there is a law for information which is “Personal” and information which is “Not personal”. Just as the law for married women differs from the law for married men, there can be different laws for data which is merely ‘personal’ and data which is ‘sensitive personal’.

This “Life Cycle Hypothesis” of data can therefore explain how the technical view of “Data” as binary bits can co-exist with the legal view of “Data” as “Personal data”, “Sensitive personal data”, “Corporate data”, “Anonymized data”, “Pseudonymized data” etc.

Just as it is the same “Core Human” who was once a foetus without life and thereafter a foetus with life, who became a baby, a child, an adult, a senior citizen and finally a corpse, burnt to dust and returned to the five elements from which the foetus was first formed, we must understand that “Data” is “Dynamic” and changes its form from time to time.

Just as a human in his family is “an identified person” but in a Mumbai local he is an “Anonymized person”, the recognition of data as personal or non-personal may have nothing to do with the data itself, but everything to do with the knowledge of the people around it.

Just as an anonymous person in a crowd may behave like a beast but turn tender when he sees known people around, anonymized data contributes to society differently from identified data.

Data starts its journey as “Data Dust” and returns to the same state after its death. This “Dust to Dust” concept is also similar to human life as interpreted by philosophers in India from times immemorial. At the same time, the “Soul” in a human is indestructible and enters and leaves the body at different points of time. Similarly, in the data life cycle, the soul is the “Knowledge and Cognizable Ability of the observer”, and it remains with the observer even after the data itself has been ground to dust by a “Forensic Deletion”. Nobody can destroy the knowledge already set in the observer’s knowledge base, and out of his memory he may even be able to re-create a clone data set.

The essence of this “Life Cycle Hypothesis” is that “Data” does not exist as “Non-Personal Data” or “Personal Data” etc. It is what it is. It is we, the people with knowledge about the data, who make it look “Identified” or “Anonymous”. By our ability to identify or not identify a data set with a living natural person, the utility of the data set is changed without the data set doing anything of its own.

The “Data Environment” is therefore what gives a character to the data. In other words, the tag of “Personal” or “Non-Personal” that we give a data set is more a contribution of the environment than of the “Data” itself. No doubt the identity has a genetic character of its own, but the final identity is given by the environment. This is like a mall where a CCTV can capture a person approximately six feet tall, well built and bald, teasing a young girl. In this data capture, the identity of the man or the girl is not known. But if we equip the data environment with face recognition software and a relevant database, then the data which was anonymous becomes data which is identifiable. This conversion did not happen because the data was different; it happened because the “Cognizable Ability” of the observer was different.

If therefore the confidentiality of the people has to be maintained, the responsibility for it lies with the face recognition software and the background database rather than with the CCTV camera. The law should factor this in and not blindly say that the “CCTV violates Privacy”.

If the background database which identifies the face is incorrect, or the AI which does the recognition has not been properly built, the face recognition may go wrong. The law should then recognize that “Data” is benign, that its character is contributed by the software, hardware etc., and that if there is an error resulting in, say, “Defamation”, it is the manufacturer of the interpreting software who should be held liable as an “Intermediary”.

The Life Cycle Hypothesis of data therefore extends the earlier hypothesis that “Data is constructed by technology and interpreted by humans”.

This life cycle concept of data has one interesting outcome. In “Data Portability”, “Data Erasure” and the “Right to be Forgotten”, a problem arises when the raw data supplied by the data subject has been converted by the data processor into value-added data and a profile of the data subject. When the data subject requests data portability or data erasure in such instances, the dilemma is whether the entire data in profile form has to be ported or destroyed, or whether only the raw data supplied by the data subject needs to be returned or destroyed.

In the case of a human being, if a person adopts a baby who grows into an adult, and the erstwhile parents want the baby back, it is not possible to return the baby, because the human cycle of growth cannot be reversed (at least by the technology we know today).

We may therefore qualify the “Data Life Cycle Hypothesis” by noting that this life cycle is “Reversible”, unlike a human life cycle.

I am sure this is only a core thought, and readers can expand on it further. Whenever an argument ensues between a technologist and a lawyer on what data is, what personal data is, or why a certain regulation exists, we may subject the argument to this life cycle hypothesis test and see whether the views of both persons can be satisfactorily explained.

Watch for more….

