Supreme Court should mandate addition of “Verified” and “Unverified” status to social media accounts

A debate is presently going on in the Supreme Court of India about the responsibilities of social media companies such as Facebook and Twitter in preventing “Fake Accounts” and “Fake News”.

It is beyond question that fake news is a menace which should be stopped. It is actually in the interest of service providers such as Facebook, Twitter and WhatsApp to introduce a system that prevents fake news, as a means of preserving the trustworthiness of their platforms.

We know that, because of phishing, customers no longer trust any e-mail or phone call from a bank. Even a call genuinely from a bank is today treated as potentially fraudulent. Social media should not be allowed to slide into the same state.

Some of the social media companies are resisting the Government’s request to take steps to prevent fake news, complaining that any attempt to identify the originating source of a message is an infringement of the privacy rights of the person accused of sending it.

In the past, Courts have extended excessive credibility to social media by:

a) Considering the forwarding of messages as an endorsement of the content

b) Clicking “Like” button or Re-Tweeting as an endorsement of the content

c) Treating the double tick in WhatsApp as delivery confirmation of a message and the double blue tick as acknowledgement of receipt (even when there is no Section 65B certificate)

This excessive reliance by the Judiciary itself increases the possibility of social media being used for planting fake stories.

Fake message prevention needs technical measures both at the time an account is created and when an objectionable message is sent.

The social media companies should therefore be mandated to observe some general security practices such as the following:

  1. At the time of opening a social media account, the details of the registrant, along with metadata such as the IP address and device ID (e.g. IMEI number, BIOS ID), must be captured by the service provider and preserved for at least 3 years after the account is closed. If any legal dispute arises on the account, the information should be treated as “Evidence” and archived permanently.
  2. When a message suspected to be “false” is sent and law enforcement demands the origin of the message, the social media manager needs to produce information such as the mobile number or IP address and the registered address of the account holder.
  3. Quite often the social media companies hold out the excuse that content is encrypted end-to-end, so that any law enforcement request requires “Decryption”, which may not be technically feasible. This contention is difficult to accept, since law enforcement requests are only for information about the originator of the message and not the message itself. An investigation is taken up only after a person who has a copy of the message files a complaint; at that point the content is already known, and there is no need for WhatsApp to provide it. The ID of the previous sending device is not part of the encrypted content but only part of the metadata. Hence there is no reason to accept the contention of these companies that they are not in a position to provide the details asked for.
  4. A request for decryption may come only when there is a demand under Section 69 of the ITA 2000, which can be invoked only in very limited cases and only when national interest is involved. There are sufficient checks and balances in the law to prevent misuse.

    If some law enforcement personnel misuse the provisions and intercept without authority, the Act treats such interception as “Unauthorized Access” under Section 66 of the ITA 2000. Hence there is no reason why these companies should be allowed to avoid cooperating with law enforcement when a request is made through due process. The data localization requirement under the PDPA actually originated because Google and Facebook do not cooperate with law enforcement and provide the information required by the police. The Supreme Court should not allow these companies to bully their way through and avoid responding to genuine law enforcement requests.

Linking of Aadhaar

If prevention of the creation of fake accounts is a necessity, then the means of ensuring this could be insisting on “KYC” of the customer at the time of opening a social media account.

Facebook, Google as well as WhatsApp want to enable their members to transfer money through their accounts to “Friends” and “Contacts”.

If, therefore, the identity of account holders is not verified properly, these accounts become conduits for unaccounted transactions.

Facebook in particular is dangerous because it is promoting its own cryptocurrency (Libra), which would be a direct threat to the economy by providing a conduit for the legitimization of black money.

If, therefore, the Government wants a good KYC using Aadhaar, PAN card, Digital Signature Certificate etc. as the basis, it is a fair request.

Since the Supreme Court has not been in favour of the private sector using “Aadhaar” for KYC purposes, these agencies can make use of the “Virtual ID” or “Offline Verification” methods.

Presently, Twitter has a “Blue Tick” facility to mark accounts which have been “Verified”. Similar verification can be introduced for other social media accounts also.

Even if verification is not made mandatory, as long as the provision is available, a majority of Indians would opt to have the “Verification Tick” as a prestige tag.

At the same time, those who opt out may be given a “Red Cross tick” to show they are “Unverified”.

In due course, the “Identified” account holders will be in the majority and will push the “Fake Account holders” into the category of “Untrustworthy accounts”. In a way this will automatically segregate the “Blue-Ticked Verified Account holders” from the “Red-Cross-ticked unverified account holders”. This will reduce the incidence of fake news substantially without any further effort.

Since “Consent” could be the basis for many other forms of “Sharing of Sensitive Information” under the PDPA, there is no reason why account holders should not be allowed to voluntarily submit their Virtual Aadhaar IDs to get their identity verified at the time of opening their social media accounts.

When the Supreme Court hears the petitions in this regard, it should therefore take the above suggestions into account and help improve the credibility of social media.


Posted in Cyber Law | Leave a comment

New Version of Course on PDPA

Cyber Law College, a division of Ujvala Consultants Pvt Ltd, has introduced a new version of its Course on PDPA (Personal Data Protection Act).

This version would include additional modules on Data Governance Framework as well as a discussion on the Data Protection Challenges in the New Technology areas such as Artificial Intelligence, Big Data etc.

The revised Course content is now as follows:

Course Contents:

      1. Evolution of Privacy Law in India (ITA 2000, ITA 2008, Puttaswamy judgement, etc.)
      2. Understanding the Concept of Privacy and its relation with Data Protection
      3. Applicability, Exemptions, Transitional Provisions
      4. Data Principal’s Rights and Data Protection Obligations
      5. Grounds of Processing
      6. Transfer of Personal data outside India
      7. DPA and DPO
      8. Compliance Obligations
      9. Penalties and Offences and Grievance Redressal mechanism
      10. Data Protection Challenges under New Technologies
      11. Data Governance Framework
      12. Interactive discussion

The 12 sessions would be divided over 6 weeks with two sessions per week.

(A free additional module will be held to cover the changes, if any, after the Act is passed.)

The tentative date of commencement would be in the middle of November. The exact date of commencement would be announced later.

The students need to be online at the time the classes are conducted but can join the sessions through computer or mobile.

At the end of the classes, the students will take an online test and thereafter will be eligible for certification. The certificate will be co-endorsed by the Foundation of Data Protection Professionals in India.

Interested students may register at the earliest.

For more information, visit

Posted in Cyber Law | Leave a comment

Sweden’s First Fine under GDPR is bizarre and tragic

Some people are gloating over the “First Fine under GDPR in Sweden”, where a school has been fined SEK 200,000 (about 20,000 euros) for testing face recognition technology for attendance monitoring.

This move deserves unequivocal condemnation and is probably an example of how regulators should not function. It actually appears as if the Swedish DPA has scored its first victim, a soft target, just to show that it has the power.

According to one expert,

“The school cannot use consent as a basis for carrying out this processing of personal data, as the individuals in question have a dependency position as pupils in the school,”

While the legal point of “undue influence” is well recognized, it also extends to many other situations, including employers taking consent from employees.

This decision is no doubt a landmark in how GDPR can be used to harass users of technology.

Even if the authority was unhappy that the school did not consult it with a DPIA before “testing” the software, it would have been reasonable to warn the school and impose a nominal fine of one euro.

Then the public would have appreciated the gesture and the intention of warning others.

It is possible that the school may be rich enough to bear the fine, but the principle of imposing a heavy fine on an educational institution for testing technology is unacceptable. If this becomes a precedent, every organization will need to take prior legal opinion before any operational decision on the implementation of technology. Many technology implementation projects would hit a roadblock. This is not good for the future of technology.

I hope this order is overturned as an excessive application of regulatory power.


Posted in Cyber Law | Leave a comment

Today is Digital Society Day of India

Today is 17th October, a day of special significance to all those who use the internet and computers or mobiles in India.

On this day in 2000, the Information Technology Act 2000 was notified. This gave legal recognition to electronic documents, the method of signing an electronic document, as well as a method for presenting an electronic document as evidence in a Court.

All this together ushered in the Digital Society of India.

I will be celebrating the day at NALSAR Hyderabad with the students.

I hope that some day MeitY will realize the importance of the day and declare it officially as the Digital Society Day of India.


Posted in Cyber Law | Leave a comment

Anonymization and Avatars of Data

“Anonymization” takes personal data out of the purview of most data protection regulations. Hence one of the objectives of data protection compliance managers is to mitigate data protection risks by pushing part of the “Protected Data” out of the “Protection Zone” by “Anonymizing” it.

In the Indian PDPA, the Data Protection Authority is eventually expected to provide an explanation of when “Personal Data” is deemed to be “Anonymized”.

For an organization, the “Data” to be governed includes “Personal Data” as well as “Anonymized Data”. Just because a certain data element is anonymized does not mean that it is no longer an asset that needs to be secured. In fact, many organizations may acquire “Identified Personal Data” at a cost and thereafter spend more to anonymize it. So, from the “Cost of Acquisition” point of view, anonymized data may be more valuable as an asset than the identified data.

However, the need to secure “personal data” because of the regulations, and the possibility of a heavy financial penalty in case of failure, introduce another element of “Opportunity Cost” to identified personal data, arising out of data breach and/or non-compliance with data security regulations.

A corporate manager who is interested in “Data Governance” (Data Governance Officer, or DGO) is concerned both with the “Cost of Acquisition” and with the “Cost of Non-Compliance”. The “Data Protection Officer” (DPO), on the other hand, is interested only in the non-compliance cost.

“Anonymization” is a process that acts as a gateway between the DGO’s territory and the DPO’s territory. The DGO hands over the data as “Identified Personal Data” to the DPO for compliance management. At the same time, he would have retained what is classified as “Anonymized Data”. The anonymized data may go to a separate shop floor for a process of adding value through “Data Analytics”.

If, however, the “Anonymization Process” is not good enough, then the organization would be exposed to the risk of re-identification. The demand for a penalty in that case would come from the supervisory authority to the DPO.

The DPO is therefore responsible for the “Adequacy” of the “Anonymization Process”. In fact, if a company adopts “Anonymization” as part of its data management policies, then the “Anonymization Process” should be subjected to a DPIA (Data Protection Impact Assessment) by the DPO.

These are probably situations in which there would be a conflict between the DGO and the DPO. While the DPO may blame the DGO for imperfect anonymization, the DGO may blame the DPO for “Motivated Re-identification” in a downstream process.

Let us leave this conflict to be resolved by proper structuring of the “Data Governance Framework”, which should include the “Data Protection Framework” as a subset.

In the meantime, let us briefly look back at Naavi’s Theory of Data and see whether this theory can recognize the journey of data from “Personal Data” status to “Anonymized Data” status.

The theory of data included a “Reversible Life Cycle Hypothesis”. This was one of the three hypotheses that made up the theory, the other two being the “Definition Hypothesis” and the “Additive Value Hypothesis of Ownership”.

The essence of the theory was that “data is constructed by technology and interpreted by humans”, and that data undergoes a lifecycle from birth to adulthood to different stages of maturity and then death, providing ownership to different persons for different value additions.

If we try to trace the life cycle of personal data through anonymization, we can identify that data goes through different phases of development in which it assumes different avatars, as shown in the diagram above.

A company may normally acquire data in the form of limited personal data collected in a web form, or when a netizen clicks on a web advertisement or visits a website. At this point the company may get some limited identity parameters such as the IP address of the person and possibly the name and email address he fills in on a web form. This limited personal data may later acquire the status of “irrevocably identifiable personal data” if some elements of identification such as a PAN number or a mobile number are collected, or become sensitive personal data if the collected data includes specific sensitive data elements. If processed into a profile, it may become profile data.

If the company removes the identity parameters and keeps them separately, the data may become “Pseudonymized Data”. If the identity parameters are irrevocably destroyed, the data may become “Anonymized Data”. The anonymized data may be aggregated into big data.
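The difference between these two steps can be illustrated in code. The sketch below is illustrative only: the field names and the set of identity parameters are assumptions, not any prescribed standard. Pseudonymization splits the identity parameters into a separate lookup table keyed by a random token, while anonymization simply discards them with no way back:

```python
import secrets

# Assumed identity parameters, for illustration only
IDENTITY_FIELDS = {"name", "email", "pan", "mobile"}

def pseudonymize(record):
    """Separate identity parameters into a lookup table.
    Whoever holds the table can re-identify the record."""
    token = secrets.token_hex(8)
    identifiers = {k: v for k, v in record.items() if k in IDENTITY_FIELDS}
    data = {k: v for k, v in record.items() if k not in IDENTITY_FIELDS}
    data["token"] = token
    return data, {token: identifiers}

def anonymize(record):
    """Drop the identity parameters outright; no lookup table is retained."""
    return {k: v for k, v in record.items() if k not in IDENTITY_FIELDS}

record = {"name": "A. Citizen", "email": "ac@example.com",
          "city": "Bengaluru", "purchase": "book"}
pseudo, lookup = pseudonymize(record)
anon = anonymize(record)
```

Note that the pseudonymized output is still “personal data” in the hands of anyone who also holds the lookup table, which is why the regulations treat pseudonymization and anonymization differently.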

Across all these categories, part of the limited-identity personal data, identified personal data or anonymized data may be called “Community Data” if it contains the data of a group of individuals.

In all the above avatars, “Corporate Data” is a class of its own and may be further classified as IP data, business intelligence data, HR data, finance data etc.

While the “Data Protection laws” may apply to personal data, sensitive personal data and profile data, cyber crime laws such as the ITA 2000 apply to all data, including personal data. In future, a Data Governance Act of India may also come to apply to “Non-Personal Data”, “Aggregated Data”, “Community Data” etc.

The fact that “Data” exists in multiple forms, and that one form can change into another and back, is a point well captured by the “Reversible Life Cycle Hypothesis” of the Theory of Data. The fact that different laws may apply to it at different stages is also explained by the life cycle hypothesis. The only difference between the human life cycle and the data life cycle is that the data life cycle can be reversed, in the sense that non-personal data can become personal data and later come back to non-personal data status. Humans may not be able to do so, except mythological characters like Yayati and Puru.

What the Theory of Data highlights is that any regulation which does not take into consideration that “Data” changes its nature in the ordinary course of its usage, and that “Dynamic Data” requires “Dynamic Regulation”, will have problems.

In the human equivalent, we have the issue of the law applicable to juveniles being different from the law applicable to adults. Similarly, the law applicable to the unmarried may be different from the law applicable to the married, the law applicable to men can be different from the law applicable to women, the law applicable to Hindus may be different from the law applicable to Muslims, and so on.

Just as there is strength in the argument that there should be a “uniform” law for humans, there should also be an attempt to explore whether “one comprehensive law of data” can cover both Personal Data and Non-Personal Data.

In view of the important transition of applicable regulations when data crosses the border of anonymization, the management of the anonymization gateway is a critical function of Data Governance.

One debate that has already come up is whether there can be a “Standard of Anonymization”.

If so, how will it be different from a de-identification standard, which defines certain parameters as “identity parameters”: if they are not present in a data set, the data set is considered de-identified; otherwise it is identified.

The “Anonymization Standard” cannot be that simple, since it should be computationally infeasible to re-identify anonymized data.

“Computational infeasibility” of re-identification comes from the erasure of the “Meta Data”, which needs to be irrevocably removed. We all know that if we create a Word document, the details of the author are perhaps known to Microsoft. If, therefore, the document is to be anonymized, we need to check that whatever metadata is associated with the document, wherever it is stored, is permanently destroyed.

“Metadata identifier destruction” could perhaps be the difference between “De-identification/Pseudonymization” and “Anonymization”.
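To make the metadata point concrete: a Word (.docx) file is a zip archive whose author and related properties sit in the parts `docProps/core.xml` and `docProps/app.xml`. The sketch below uses only the Python standard library; a real document can carry identifiers in other parts as well (revision history, comments, embedded objects), so removing these two parts alone is not complete anonymization. It copies an archive while dropping the metadata parts:

```python
import zipfile

# Metadata parts of a .docx that commonly carry author details
METADATA_PARTS = {"docProps/core.xml", "docProps/app.xml"}

def strip_docx_metadata(src, dst):
    """Copy the archive at src to dst, omitting the metadata parts.
    Returns the list of parts that were removed."""
    removed = []
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            if item.filename in METADATA_PARTS:
                removed.append(item.filename)
                continue
            zout.writestr(item, zin.read(item.filename))
    return removed

# Demo on a minimal stand-in archive (a real .docx has many more parts)
with zipfile.ZipFile("sample.docx", "w") as z:
    z.writestr("docProps/core.xml", "<coreProperties>author</coreProperties>")
    z.writestr("word/document.xml", "<document/>")
removed = strip_docx_metadata("sample.docx", "clean.docx")
```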

In forensic destruction of data, early DoD standards required several passes of data erasure before a data-holding device could be said to be sanitized. This implies that even when data is forensically erased, a certain number of repetitions is required to ensure that the process cannot be reversed by an intelligent de-sanitization algorithm.

The essence of this “Anonymization” through forensic overwriting of data bits is to randomize the overwriting process so that it cannot be reversed.

The standard of anonymization that can be recommended to the DPA is therefore not necessarily overwriting all the bits to be sanitized with a zero bit several times.

It can be different, and should be aimed at randomizing the binary bit distribution in the data-holding device. An example of such a process could be:

a) Overwrite all the bit sets that represent the identification parameters with zeros, but in a random sequence. (This presupposes that the data set can be divided into identity parameters and other data associated with them.)

b) Repeat by overwriting all the bit sets once again with, say, ones, again in a random sequence.

c) Repeat by spraying zeros and ones randomly over all the data bits in the zone.

This process should leave a random distribution of zeros and ones in the selected zone which cannot be reversed. As long as the rest of the data does not contain any identity parameters, the data can be considered “Anonymized”.
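The three passes above can be simulated on an in-memory buffer. This is a toy illustration of the randomization idea only (the offsets and sample data are invented); sanitizing a real storage device additionally has to contend with wear-levelling, remapped sectors and controller caches that a user-space overwrite never reaches:

```python
import secrets

rng = secrets.SystemRandom()

def randomized_overwrite(buf, start, end):
    """Destroy buf[start:end] (the identity-parameter zone) in three passes."""
    offsets = list(range(start, end))
    # Pass a) write zeros, visiting the offsets in a random sequence
    rng.shuffle(offsets)
    for i in offsets:
        buf[i] = 0x00
    # Pass b) write ones (0xFF bytes), again in a random sequence
    rng.shuffle(offsets)
    for i in offsets:
        buf[i] = 0xFF
    # Pass c) spray random bits over every byte in the zone
    for i in range(start, end):
        buf[i] = rng.randrange(256)

data = bytearray(b"PAN:ABCDE1234F|city:Bengaluru")
randomized_overwrite(data, 0, 14)  # wipe only the identity-parameter zone
```

After the call, the first 14 bytes are random noise while the non-identifying remainder of the record is left untouched.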

Maybe technology experts can throw more light on this.



DOD standard for data erasure

Posted in Cyber Law | 3 Comments

The Roadmap of PDPA

The Personal Data Protection Act of India (PDPAI), by whatever name it will finally be called, is expected to be tabled in the winter session of Parliament. (See report here.) Though the Government is under an obligation to the Supreme Court, arising from the Aadhaar case, to pass the law at the earliest, this session is likely to be kept occupied also with the proposed Uniform Civil Code Bill. Hence it is not clear if substantial progress can be made on the passage of the bill during the session.

The industry lobby, however, is interested in deferment of the bill until its demand for dilution of the “Data Localization” requirement is conceded. One of the tricks which may be used is to push the bill into a Standing Committee, which may delay its passage indefinitely.

Though the bill may require some final touches after it is presented, we must appreciate that it was drafted under the direction of Justice Srikrishna and would have been further refined after receipt of public opinion. More refinements will come up for discussion in Parliament itself. Hence the need for sending it to a standing committee is low. But the vested industry interests will do their best to ensure that the passage of the bill is delayed by insisting that it be sent to the standing committee.

Once the bill is passed by both the houses and gets the assent of the President, the Act will become effective.

The Government may not exercise the discretion to make the “Notified Date” different from the date of notification of the Act, as provided for under Section 97, though a window of 12 months has been provided for notifying the “Notified Date”.

On the notified date, the power to make rules and establish the DPA will rest with the Government. Within the next 3 months the DPA needs to be appointed. This will be a body of 6 persons with a designated chairperson.

Once the DPA is formed and infrastructure such as office space and a secretariat is provided, the responsibility for further action shifts to the DPA.

The first phase in the road map will therefore be the establishment of the DPA and nothing more.

Subsequently, the DPA will have to draft several regulations as “Rules” and notify the same through a Gazette notification.

Within 12 months of the “Notified Date”, the DPA will bring out the first set of regulations, which will consist of the “Grounds of Processing of Personal Data”. At this time the DPA has to define what is “Personal Data” and what is “Anonymised Data”, besides clarifying the applicability of the Act to processing carried out outside India by Indian and non-Indian entities.

“Anonymisation” has been defined under the Act as under:

Anonymisation in relation to personal data, means the irreversible process of transforming or converting personal data to a form in which a data principal cannot be identified, meeting the standards specified by the Authority.

Personal data has been defined under the Act as under:

“Personal data” means data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, or any combination of such features, or any combination of such features with any other information;

Both these definitions need to be elaborated in the rules, indicating what is not personal data and what does not constitute “Anonymisation”.

Additionally, the “Codes of Practice”, which will cover the substantive aspects of the regulation, will also have to be notified “within 12 months” from the notified date.

The Government may choose different notification dates for notifying the Grounds of Processing as per Section 97(5) and the Codes of Practice as per Section 97(6).

The rules regarding cross-border transfer restrictions would be notified on a separate date as per Section 97(7).

The residual regulations would be notified within 18 months of the notified date as per Section 97(8), and this date may be different from the date under Section 97(7).

The entire road map as per Chapter XIV is captured here.

In the industry there are already some efforts to provide inputs to the Government on how the regulatory process needs to be streamlined. The effort of select private entities to be part of the regulatory process is to be appreciated, though excessive concern is not warranted. For the Government, legislation is a day-to-day affair and the officials are well equipped to go through the process systematically.

We may however continue to provide inputs on some of the more technical and legal features of the regulations.




Posted in Cyber Law | Leave a comment