Reversible Life Cycle hypothesis of the Theory of Data

This is in continuation of our discussion on the Theory of Data, which seeks to explain “What Data is” for different communities such as technologists, lawyers and business managers. In this direction we have stated that there are three hypotheses that we want to explore, and the first of these hypotheses was the thought that

“Data is a congregation of fundamental data particles which come together to form a stable, meaningful pattern for software/hardware to enable a human being to make meaning out of the pattern.”

If we take an expression ’10’ and ask several people to read it as ‘data’, then perhaps most of them will read it as the number ten. But ask a “Binary Reader” who knows the language of binary the way you and I know English, and he will say ’10’ is the decimal number two.

[This is not much different from asking people to read “Ceci est une pomme”. Not many would be able to understand this. But those who know French would understand it as the equivalent of “This is an apple”.]

Can ’10’ be ‘Ten’ and ‘Two’ at the same time for different people? The answer is a definite yes, since the human-understandable meaning of the data ’10’ depends on the cognizable power of the human. “What Data is”, therefore, cannot be expressed in “absolute” terms; it is relative to the language which a human uses to “Experience” the data. We use the word “Experience” here since data can be read as text, seen as an image or heard as a sound, depending on the capability of the converter of the binary notation into a human experience of reading, seeing or hearing.

If we go a step deeper, the binary data ’10’ does not exist on a hard disk as an etching of ‘1’ and ‘0’. It represents a “State” of a region of the hard disk: whether it carries a current or not, whether it is magnetized in one direction or the other, whether a light is in an on or off state, and so on.

The fundamental data particles which we often call binary bits do not have a specific form. If the data interpreter is capable of seeing the ‘lights on and off’ and converting them into human-understandable text, it is fine. If the interpreter can sense the magnetic state, that is also fine. If the data is defined as the “Spin state” of an electron or a nucleus, as in Quantum Computing, and the data interpreter can identify the spin states, then that type of data representation is also acceptable.

But in all these cases, “Data” is not “Data” unless there is a pattern to the data particles coming together and staying together until they are ‘observed by the interpreter’. If the data is unstable and in a chaotic condition, the data particles may be there, but they do not represent any meaningful data.

The fundamental data particles existing in a chaotic state and existing in a stable pattern are two states which are like a human foetus before life enters and after life enters. This is the concept of “Data Birth”.

Once a “Data Set” which is a congregation of a stable pattern of fundamental data particles is formed, it can grow bigger and bigger by adding more data bits or more units of data sets. This is the horizontal and vertical aggregation of fundamental data particles.

Horizontally, when ’10’ becomes ‘10111000’, it becomes the number one hundred and eighty-four.

Similarly, when a stream of binary such as ‘01000001 01001110 01000100’ is read through a binary-to-ASCII converter, it reads as ‘AND’. The same pattern reads as 4279876 in a binary-to-decimal converter.
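This dual reading can be sketched in a few lines of Python. The two one-line “converters” here are only illustrative stand-ins for the interpreters described above:

```python
# The same 24-bit pattern experienced two ways, depending on the converter.
bits = '01000001 01001110 01000100'

# A binary-to-ASCII converter: each 8-bit group becomes one character.
as_text = ''.join(chr(int(group, 2)) for group in bits.split())

# A binary-to-decimal converter: the whole stream read as one number.
as_number = int(bits.replace(' ', ''), 2)

print(as_text)    # AND
print(as_number)  # 4279876
```

Nothing about the bit pattern itself changed between the two readings; only the converter did.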

Thus ‘1’ can grow into ’10’ and further to ‘10111000’ etc in a horizontal direction.

When there is a text ‘vijay’ and it is combined with another data element which reads ‘vijay@naavi.org’, we have a composite data set which a human may recognize as a name and an e-mail address. This composite data set is considered “Personal Information”.

Thus, a letter grows into a name horizontally and combines with an e-mail address vertically to become “Personal Information”.
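As a rough sketch of the two directions of growth (the field names here are illustrative, not a prescribed format):

```python
# Horizontal growth: single characters aggregate into a name.
name = ''.join(['v', 'i', 'j', 'a', 'y'])

# Vertical growth: independent data elements combine into a composite
# data set which a human recognizes as personal information.
personal_information = {'name': name, 'email': 'vijay@naavi.org'}

print(personal_information)
```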

Thus “Personal Information” is a grown-up state of data which started as a single data cell of 1 or 0 and added other cells, just as a human cell grows into a foetus, acquires life on the way, gets delivered as a baby, and grows into a child, an adult and so on.

A similar “Life Cycle” can be identified in the manner in which “Data” gets born within a control environment (say within the corporate data environment) and then changes its nature from a foetus without life to a foetus with life, a delivered baby, a child, an adult etc.

Somewhere during the journey, the personal data may become sensitive personal data, or lose some of its characteristics and become anonymized data, or wear a mask and become pseudonymized data, and finally may get so dismembered that the data set disintegrates from a “Composite data set” into “Individual data sets” and further into “fundamental data particles”, losing the “stable pattern” which gave it a “Meaning”. This is like the ‘death’ of a human being.
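A minimal sketch of the “mask” and “loss of characteristics” transitions, assuming an illustrative record (the field values and the hash-based mask are invented for this example; a real pseudonymization scheme would keep a separately controlled mapping back to the identity):

```python
import hashlib

# A composite personal data set (illustrative values).
record = {'name': 'vijay', 'email': 'vijay@naavi.org', 'city': 'Bangalore'}

# Pseudonymization: the identifier wears a "mask"; the link back to the
# person survives only in a mapping held under separate control.
mask = hashlib.sha256(record['email'].encode()).hexdigest()[:12]
pseudonymized = {'pseudonym': mask, 'city': record['city']}

# Anonymization: identifying elements are removed altogether, leaving a
# data set that no longer points to a living natural person.
anonymized = {'city': record['city']}

print(pseudonymized)
print(anonymized)
```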

Thus the “life cycle” of data is comparable to the life cycle of a living being.

Just as the law for an individual when he is a minor is different from the law for an adult, there is a law for information which is “Personal” and information which is “Not personal”, etc. Just as the law for married women differs from the law for married men, there could be different laws for data which is just ‘personal’ and data which is ‘sensitive personal’.

This “Life Cycle hypothesis” of data can therefore explain how the technical view of “Data” as binary bits can co-exist with the legal view of “Data” as “Personal data”, “Sensitive personal data”, “Corporate data”, “Anonymized data”, “Pseudonymized data”, etc.

Just as it is the same “Core Human” who was once a foetus without life, then a foetus with life, then a baby, a child, an adult, a senior citizen and finally a corpse burnt to dust and rejoined with the five elements from which the foetus was first formed, we must understand that “Data” is “Dynamic” and changes its form from time to time.

Just as a human in his family is “an identified person” but in a Mumbai local train is an “Anonymized person”, the recognition of data as personal or non-personal may have nothing to do with the data itself but with the knowledge of the people around it.

Just as an anonymous person in a crowd may behave like a beast but turn tender when he sees known people around, anonymized data contributes differently to society than identified data does.

Data starts its journey as “Data Dust” and returns to the same state after its death. This “Dust to Dust” concept is also similar to human life as interpreted by philosophers in India from times immemorial. At the same time, the “Soul” in a human is indestructible and enters and leaves the body at different points of time. Similarly, in the data life cycle, the soul is the “Knowledge and Cognizable ability of the observer”, and it remains with the observer even after the data itself has been ground to dust by a “Forensic Deletion”. Nobody can destroy the knowledge already set in the observer’s knowledge base, and out of his memory he may even be able to re-create a clone data set.

The essence of this “Life Cycle Hypothesis” is that “Data” does not exist as “Non Personal Data” or “Personal Data”, etc. It is what it is. It is we, the people with knowledge about the data, who make it look “Identified” or “Anonymous”. By our ability to identify or not identify a data set with a living natural person, the utility of the data set is changed without the data set needing to do anything of its own.

The “Data Environment” is therefore what gives a character to the data. In other words, the tag that we give a data set as “Personal” or “Non Personal” is more a contribution of the environment than of the “Data” itself. No doubt the identity has a genetic character of its own, but the final identity is given by the environment. This is like a mall where a CCTV camera captures a person, approximately 6 feet tall, well built, with a bald head, teasing a fair-looking young girl. In this data capture, the identity of the man or the lady is not known at all. But if we equip the data environment with face recognition software and a relevant database, then the data which was anonymous becomes data which is identifiable. This conversion did not happen because the data was different. It happened because the “Cognizable Ability” of the observer was different.

If, therefore, the confidentiality of the people has to be maintained, the responsibility for it lies with the “Face recognition software” and the background database rather than the “CCTV camera”. The law should factor this in and not blindly say “CCTV violates Privacy”.

If the background database which identifies the face is incorrect, or the AI which does the recognition has not been properly built, the face recognition may go wrong. The law should then recognize that “Data” is benign, that its character is contributed by the software, hardware, etc., and that if there is an error resulting in, say, “Defamation”, it is the manufacturers of the interpreting software who should be held liable as an “Intermediary”.

The Life Cycle hypothesis of data therefore extends the earlier hypothesis that “Data is constructed by technology and interpreted by humans”.

This life cycle concept of data has one interesting outcome. In “Data Portability” and “Data Erasure” or the “Right to Forget”, we have a problem when the raw data supplied by the data subject has been converted into value-added data and a profile of the data subject by the data processor. When the data subject requests data portability or data erasure in such instances, the dilemma is whether the entire data in profile form has to be ported or destroyed, or whether it is only the raw data supplied by the data subject which needs to be returned or destroyed.

In the case of a human being, if a person adopts a baby who grows into an adult and the erstwhile parents want the baby back, it is not possible to return the baby, because the human cycle of growth cannot be reversed (at least by the technology we know today).

We may therefore qualify the “Data Life Cycle Hypothesis” by noting that this life cycle is “Reversible”, unlike a human life cycle.

I am sure that this is only a core thought and readers can expand on it further. Whenever an argument ensues between a technologist and a lawyer on what is data, what is personal data, why a certain regulation exists, etc., we may subject the argument to this life cycle hypothesis test and see if the views of both persons can be satisfactorily explained.

Watch for more….

Naavi

Posted in Cyber Law | Leave a comment

Theory of Data and Definition Hypothesis

Out of the three main challenges that we are trying to address in this Theory of Data, the first and most fundamental is a proper definition of “Data” which is acceptable to technology persons, legal persons as well as management persons.

The hypothesis we propose is that

“Data is an aggregation of fundamental data particles which combine together horizontally and vertically to derive simple and composite data sets which have further use to humans based on the pattern in which the fundamental data particles get organized”.

Horizontally, the fundamental data particles, when broken into sets of 8, become “bytes”. Depending on the preference of technologists, the number of data particles in a standard set can be varied. Vertically, bytes can be added together to constitute larger composite data sets.
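A small sketch of the two directions of aggregation, using an illustrative bitstream and an ASCII interpreter as the example converter:

```python
bitstream = '0100000101001110'   # a stream of fundamental data particles

# Horizontal aggregation: break the stream into sets of 8 (bytes).
byte_groups = [bitstream[i:i + 8] for i in range(0, len(bitstream), 8)]

# Vertical aggregation: the bytes combine into a larger composite data
# set, here rendered as text by an ASCII interpreter.
composite = ''.join(chr(int(group, 2)) for group in byte_groups)

print(byte_groups)  # ['01000001', '01001110']
print(composite)    # AN
```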

At the first level, when fundamental data particles come together randomly, the data has no cognizable meaning to a human being. As the data particles come together and stay together, a pattern develops. Certain patterns formed in such a congregation become cognizable by interpreters (software and hardware) created for converting the congregation of fundamental data particles into what humans recognize as text, image or sound, at which point they become data at the human usage level.

This human understandable form of data is subject to regulations and other interpretations. Humans cannot ascribe meanings to data particles unless they are organized in a specific pattern. Such unorganized fundamental data particles are “gibberish” for the human user.

The human interpretation of a given composite data set is “Relative” to the cognizable ability of the user. Hence data which is understood by a human is always “person dependent”; its interpretation is “Relative” to the person’s ability. Where the person does not have the ability to understand the presented data pattern, because he may not have the right reader (software or hardware), he will still see only “gibberish”.

When the compatible readers are used, the human can view the data as “Text” or “Sound” or “Image”.

The categories of data which we normally recognize as “Personal data”, “Non Personal data”, etc. are all interpretations made by humans based on their own perceptions and not an “Absolute Truth”. No data is “personal” or “non personal” per se. It is interpreted so by a human because he follows a certain school of thought.

Data therefore does not have an absolute existence at the level of human recognition but is relative to the interpretive ability of the data user.

The principle we should recognize here is “Data is in the beholder’s eyes”. Data is constructed by technology but interpreted by humans.

If some call “Data” “Original Data” and produce a hard disk in a Court as “Evidence”, it is to be recognized that there are certain data patterns on the hard disk which some (maybe a majority of people) recognize as some kind of text, image or sound, and this is the evidence presented. This principle is already used in Indian law in the form of Section 65B of the Indian Evidence Act.

Watch for more….

(P.S: In subsequent discussions in 2020, this hypothesis has been renamed the “Interpretation hypothesis”.)

Naavi


New Data Theory of Naavi built on three hypotheses

In searching for a proper expression and articulation of the Theory of Data, Naavi has decided to adopt a set of three hypotheses which are the pillars of this New Theory of Data.

The three hypotheses are

a) A hypothesis that tries to explain the definition of data

b) A hypothesis that tries to explain the life cycle of data

c) A hypothesis that tries to explain the ownership of data

The three hypotheses combine together in developing a comprehensive theory of data.

Watch out for more …

Naavi


Six amendments proposed to California Consumer Privacy Act

The California Consumer Privacy Act (CCPA) which is applicable to the collection and processing of the personal data of Californian residents is set to become effective from 1st January 2020.

CCPA has already distinguished itself by its honest approach to privacy protection by specifically admitting the possibility of a “Sale of Personal Data”. Unlike GDPR, which does not provide clarity on whether personal data may be “Sold” even when there is “Explicit Consent” and leaves data processing companies in doubt, CCPA is clear in its prescriptions.

CCPA also recognizes a “Financial Value” for personal data and recognizes the data subject’s right of ownership to deal with it even in commercial terms. While Privacy activists may debate the ethics of “Trading of Personal Data”, the fact is that this provision gives some breathing space to data-dependent businesses.

Now, before the Act becomes effective, some amendments have been proposed and are likely to be discussed and probably passed before the January 1, 2020 deadline for implementation.

The six amendments are as follows.

1. Reasonable Authentication

CCPA shall allow a consumer to submit requests through a “Consumer Account”, if the consumer maintains an account with the business.

Employee information collected in the course of a natural person acting as a job applicant, employee, owner, director, officer, medical staff member or contractor is exempted from the definition of personal information for one year (until January 2021).

The exemption also covers employee emergency contact information and information used to administer benefits, but it does not apply to a business’s obligation to provide notice to employees about its collection practices or employees’ eligibility for the data breach provision’s private right of action.

2. Classification of Personal Information

This amendment adds the phrase “reasonably capable of being associated with . . . a particular consumer or household” to the definition of personal information.

The bill also clarifies that any information made available by federal, state or local government is “publicly available” and is not personal information.

The amendment also eliminates the provision of the CCPA stating that publicly available information that a company uses in a manner incompatible with the purpose for which it was originally collected by the government is considered covered personal information.

It also clarifies that personal information does not include de-identified or aggregate information.

3. Right to Forget

The amendment adds a new exception to a consumer deletion request that allows a business to deny the request if the information is needed to “fulfill the terms of a written warranty or product recall conducted in accordance with federal law.”

It also creates an industry-specific exemption from the right to opt out of the sale of personal information for vehicle or ownership information maintained or shared between an automobile dealer and a manufacturer if it is maintained or shared for certain purposes.

4. Data Brokers

This amendment requires “data brokers” – defined as a “business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship” – to register with the California attorney general.

5. Miscellaneous amendments

a) A one-year exemption  to be provided for personal information exchanged in certain business-to-business communications.

b) A covered business does not have to collect or retain consumer information for CCPA purposes that it would not otherwise collect or retain in its ordinary course of business.

c) Businesses must disclose to consumers their right to request specific pieces of information a business has collected about them, and includes some changes to the CCPA’s exception for consumer-credit information covered by the Fair Credit Reporting Act (FCRA)

6. Exemption from Toll-free Phone Number

An exclusively online business with a direct relationship with a consumer need not provide a toll-free phone number to which consumers can submit a request for disclosure of information. It need only provide consumers with an email address.

Additional clarification in the form of draft regulations is expected from the California attorney general in late October or early November.

It is also expected that California may pass a State Privacy Legislation soon. Since many other states (16 by last count) are following in the steps of CCPA, the changes in CCPA are likely to have a wide impact on the privacy protection regime in the USA.

Indian businesses need to closely watch the developments in the privacy regime sweeping the USA in order to structure their compliance measures.

Naavi


Data Is Always Evolving

One of the myths being perpetuated by Data Protection Regulations is that there is something called “Personal Data” and something called “Sensitive Personal Data” which companies collect and which need to be protected.

The regulators, however, forget that in a corporate environment several kinds of data keep flowing in and out. It is not always that the elements of a “Data Set”, like a name, address, e-mail, mobile number, health data, financial data, etc., come in at one single point of time so that they can be immediately tagged and protected as required. That happens only where a company puts out a web form and collects designated information from a source. In such cases a “Consent” can be obtained and data protection compliance can be achieved.

However, in most cases data flows in through different contexts and channels, often in unstructured format. A company could have received a name and e-mail address a year back, and today the same person’s further data may just land within the data environment of the organization. When the new information is fused with the earlier information, the simple data grows into a bigger and more sensitive form.
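A sketch of this fusion, using invented field names and values: a year-old record merges with newly arrived data about the same person, and the sensitivity of the data set rises without any fresh consent event.

```python
# A year-old record already held by the organization.
existing = {'email': 'vijay@naavi.org', 'name': 'vijay'}

# New data about the same person arrives through a different channel.
incoming = {'email': 'vijay@naavi.org', 'health_data': 'blood group O+'}

# Fusion on a shared identifier: the simple data set grows into a
# bigger and more sensitive one.
if incoming['email'] == existing['email']:
    fused = {**existing, **incoming}

print(fused)
```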

Similarly, it is possible for a set of available data to be disintegrated, so that sensitive data is converted into non-sensitive data or even anonymized data.

The fact that personal data is always a “Set” of elements, one of which is the core identifier of a living natural person, and that there is an organic growth of the data into different forms, is not adequately captured in the data protection regulations. Some data protection regulations define individual identifiers themselves as “Personal Data”, missing the point that an identifier which cannot be linked to another “identity” of a living individual cannot be called “Personal Information”.

As an example, we often hear that an IP address is personal information, or a physical address is “Personal Information”, etc. Though data protection practitioners try to enable their processes to identify the conversion of data from one state to the other, through manual intervention or with the use of AI, this remains a lacuna in the regulatory definition of data.

The New Theory of Data therefore has to capture in its data definition that “Data is Dynamic”, that “It evolves over time”, and that a “Consent” obtained when the data is in its zero-day status fails when a new data element comes within the radar of the company.

An example: a company may have a group photo of people, many of whom are not known to it. Suddenly, one of the persons becomes identifiable because he sends in a job application with a photo. Now the group photo, which was already in the data system as of a past date, becomes “Identifiable” data. This dynamic nature also affects Data Portability and Data Erasure requests.

The New Theory of Data needs to recognize these anomalies and ensure that there is a valid explanation of these special instances of data within the theory of data.

Similarly, “Data as a Property” of the data subject, or data as a productive asset of an organization, is not properly captured by the present technical or legal approach to data.

Thus the current system of understanding data from the perspectives of technology and law appears to pose contradictions, because each domain of stakeholders has, at different points in time, tried to describe the term “Data” for its own convenience. If these differences are not amicably resolved, corporate managements will find it difficult to balance the differing demands of technologists, lawyers and business managers.

The need for a new approach to understanding data is therefore critical and this new theory should be capable of creating a proper definition for the term data so that all seemingly contradictory views converge under the new theory.

Watch out for more…

Naavi


Data Science Has to Evolve From Technical Perspective…

Data Science is an important area of study in the present day, when “Data” is considered an important economic asset which can be harnessed like oil or mined like gold.

According to Wikipedia,

“Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data.”

The view of most data scientists, however, is limited to disciplines such as “Statistics”, “Mathematics”, “Computer Science” and “Information Science”.

Data science is considered as a “concept to unify statistics, data analysis, machine learning and their related methods” in order to “understand and analyze actual phenomena” with data. The term “Data Science”  is often used interchangeably with  concepts like business analytics, business intelligence, predictive modeling, and statistics.

The business of “Big Data”, “Machine Learning”, “AI Algorithms” all depend on “Data Scientists”.

For those of us who have watched the growth of “Information Security” as a professional domain, the evolution from the “Technical Aspects of Information Security” to the “Techno-Legal Aspects of Information Security” was clear. Today, the legal aspects of Information Security have taken a firm grip on the Information Security domain. With “Information Security” migrating to “Data Security” and the emergence of stringent laws such as GDPR, the future of “Information Security” has slipped from the hands of CISOs to DPOs (Data Protection Officers).

While the IS domain soon realized the importance of “People” along with “Processes”, the transition was fully extended into the third dimension of “Behavioural Science” by practitioners like Naavi.

At present the Data Security domain has taken a further step ahead towards “Data Governance” bringing in the “Business Management” professionals closer to the group of Data Security Professionals.

A similar graduated evolution is now also required in the field of Data Science. The various theories of data that support the Data Science field today work around the technical aspects of data. Statistical tools to segregate and detect correlations in a heap of data, draw predictive conclusions, and create self-learning and self-improving algorithms are all based on a technical perspective of data.

The technical perspective sees data as a “Stream of Binary Notations” that can be read by a “Binary Reading Device”. The “Binary Stream” rests on some surface like the platter of a hard disk, a Compact Disk or a Memory Card. The reader reads the binary stream, passes it through an application that assigns meaning to the data stream, and thereafter sends it on to another data processing step or to a data delivery device like a “Monitor” or a “Speaker” for the human being to “Experience” the data.

To enable this conversion of data from a binary stream to a human experience, computer engineers have developed protocols: splitting a data stream into finite bit packs, separating them with delimiters, and adding metadata to instruct the devices on processing. How efficiently data can be read, how multiple data can be aggregated, how profiling can be achieved, etc. are the problems that Data Scientists try to examine. But in this entire process they deal with data as a binary stream.
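A toy illustration of this idea, using a packet format invented purely for this sketch (a 1-byte content-type tag plus a 4-byte length acting as the metadata that tells the reader how to carve the payload out of the stream):

```python
import struct

# A toy framing protocol: each packet carries a 1-byte content-type tag
# and a 4-byte big-endian payload length, followed by the payload bytes.
def frame(payload: bytes, content_type: int) -> bytes:
    return struct.pack('>BI', content_type, len(payload)) + payload

def unframe(packet: bytes):
    content_type, length = struct.unpack('>BI', packet[:5])
    return content_type, packet[5:5 + length]

packet = frame(b'AND', content_type=1)   # content_type 1: "text", say
print(unframe(packet))  # (1, b'AND')
```

Without the metadata in the header, the payload is just an undifferentiated run of bits; the header is what tells the interpreter where one unit of data ends and how to treat it.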

From the “Technical Perspective”, encrypted data is also a binary stream, though it is different from the binary stream of the parent data itself.

The moment we start recognizing the binary stream as a word, sentence, picture, sound etc., we are adding human interpretations of the binary stream. Then Data is no longer in the technical domain only. It has crossed over to the human domain.

If the human observing the data is blind, he will not see any data. If he is colour blind, he may see some data but miss some other parts of it. If he is deaf, he may miss some sound. If his ear or brain cannot respond to some frequencies, he will hear sounds which are different from the sound his neighbour hears from the same speaker. Even in texts, if the person does not know the language of the text, he will not understand the data.

Thus “Data” is not what the “Binary Stream” suggests. Data is what the human perceives. It is for this reason we say “Data is in the beholder’s eyes”.

Do the Data Scientists of the day factor in this possibility that Data may be different for different people?

Similarly, when law enforcement looks at “Data” as “Evidence”, the same issue confronts them. The data that a person sees is dependent on the technology that converts the binary stream into a human experience. If the devices (hardware and software) used for the purpose do not do their job as expected, then even a person who is not colour blind or deaf will not see the same data that somebody else with another device may see.

For example, a Word document created in MS Word can also be read in LibreOffice or some other word processor. But whether the reproduction will be exactly the same as what one sees in MS Office is doubtful. Similarly, a web document may look different in different browsers and with different configurations. Hence the data as seen by a human, even one with no disabilities, is still dependent on several factors and is not a faithful rendition of the binary bits which data scientists recognize as the data.

The principle is similar to Einstein’s theory of relativity. Data is not absolute. It is relative to the devices used to convert the binary stream into text, sound or image, and further to the ability of the observer to observe faithfully what is rendered.

An ideal theory of data therefore cannot stop at studying data only from the perspective of technology without fully absorbing how other factors affect the human experience of data.

Perhaps there is a need to think differently and develop a “New Theory of Data”.

Watch out for more…

Naavi
