Time to be accountable

This article by Naavi is reproduced from the India Legal magazine of February 22, 2020:

Quote:

In December 2018, the central government proposed to issue an amendment to the Intermediary Guidelines under Section 79 of the Information Technology Act, 2000 (ITA 2000). This was neither a new Act nor a new rule. It was only a proposed amendment to a rule, placed for public comments.

However, it was challenged as unconstitutional by some activists and the matter was taken to the Supreme Court. The government is now expected to present a new version of the rule in the Supreme Court, and the industry lobby is already mounting pressure on the Centre to bend the rules to its advantage.

Section 79 and the rules under it are meant to bring accountability to intermediaries to prevent certain crimes such as defamation, spreading of hatred and disharmony, and incitement to violence through information posted on websites, blogs and messaging platforms. The role of intermediaries in fuelling such crimes, and in assisting law enforcement agencies in detecting and bringing the perpetrators to book, is undisputed. However, these business entities are averse to accepting any responsibility for preventing their platforms from being used to spread fake news that disturbs the community, or as a tool for anti-social elements.

An internet intermediary, incidentally, provides services that enable people to use the internet. They include network operators; network infrastructure providers such as Cisco, Huawei and Ericsson; internet access providers; internet service providers; hosting providers; and social networks such as Facebook, Twitter, LinkedIn, etc.

The use of fake videos and Artificial Intelligence (AI)-based content for posting malicious material has made the problem more acute since the amendment was first proposed. Two of the most contentious aspects of the proposed amendments are that the intermediary is required to trace the originator of a message that flows through its platform and that it should deploy technology-based automated tools for proactively identifying, removing or disabling public access to unlawful information.

Objections have been raised on the grounds that the intended measures are “technically infeasible”, infringe on “privacy” and put restrictions on “freedom of expression”. Given the propensity of courts to react favourably whenever activists quote Articles 21 and 19 of the Constitution, the industry lobby expects a climbdown from the government. After all, the government had buckled under its pressure when it diluted the data sovereignty principles in the Personal Data Protection Bill by dropping “data localization”.

The challenge before the Court is now two-fold. The first is to realise that excuses based on technical infeasibility are false, and that such measures are already being used by the industry for compliance with other international laws such as the General Data Protection Regulation (GDPR). The second is that “national security” is as much a duty of the government and a fundamental right of citizens as the protection of the privacy or freedom of expression of certain other individuals. The law should not allow disruption in the lives of innocent persons while protecting the rights to privacy and freedom of expression of some activists.

At present, most large intermediaries do scan the messages that pass through their services to identify the nature of the content so that appropriate advertisements can be displayed when the receiver of the message reads them. Most leading companies, including Facebook, also use AI to read messages and profile users. Hosted content is also moderated and scanned for malicious code as part of information security measures. Hence, the claim that it is impossible to make a reasonably effective check and flag objectionable content is not acceptable, particularly in the case of large intermediaries like Google and Facebook. As regards the proactive removal of content which is “unlawful”, this involves the judgment of intermediaries. However, if they are ready to proactively identify potentially objectionable content, the government can always suggest a mechanism for reviewing the tagged content and getting it moderated.

Most data managing companies undertake a similar “discovery” exercise when it comes to complying with laws such as the GDPR. There is no reason why they should not apply similar “data discovery” tools to identify offensive content and flag it for manual supervision. The technology is available and is being used by the same companies that are resisting the request of the government. The Court should reject such claims. Their bluff needs to be called.
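As a purely illustrative sketch of what such a “data discovery” style flagging step might look like (the keyword patterns, function names and review-queue handling are assumptions, not any platform's actual implementation), content could be screened and routed to human moderators rather than removed automatically:

```python
import re
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical rule set for illustration only; a real deployment would rely on
# trained classifiers and language-specific models, not a short keyword list.
OBJECTIONABLE_PATTERNS = [r"\bincite\b", r"\briot\b", r"\bfake\s+cure\b"]

@dataclass
class FlaggedItem:
    content_id: str
    text: str
    matched_rules: List[str]

def scan_content(content_id: str, text: str) -> Optional[FlaggedItem]:
    """Flag content for manual moderator review; return None if nothing matches."""
    hits = [p for p in OBJECTIONABLE_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return FlaggedItem(content_id, text, hits) if hits else None

# Usage: flagged items go to a human review queue, not to automatic removal.
item = scan_content("post-101", "Join us and riot at the square tonight")
if item:
    print(f"{item.content_id} queued for review; rules matched: {item.matched_rules}")
```

The point of the sketch is only that automated screening plus human moderation is an existing, routine pattern, not a novel technical burden.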

We may also note that the Personal Data Protection Act, which is expected to become law soon, also brings in a provision whereby social media intermediaries have to provide an option to users to get themselves “verified”, and the “verification” should be visibly presented with the account.

In other words, it will be mandatory for social media companies to identify the owner of a message and therefore make him accountable. In the case of WhatsApp, it must be mentioned that what is required is not the “reading of the message”, which is objected to from the “privacy” angle since the information may be encrypted, but only identifying the origin of a message. This can be technically achieved by tweaking the header information of the message and incorporating a checksum identity of the message, which can be identified at the server whenever the message is forwarded.
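A minimal sketch of the idea follows, under the simplifying assumption that the same checksum travels in the message header every time the message is forwarded; the function names and the in-memory registry are illustrative, and this is not WhatsApp's actual protocol:

```python
import hashlib
from typing import Dict, Optional

# Server-side registry mapping a message checksum to the first sender seen.
# Only the checksum in the header is stored; the encrypted payload is never read.
_origin_registry: Dict[str, str] = {}

def checksum(payload: bytes) -> str:
    """Checksum identity of the message, computed once by the originating client."""
    return hashlib.sha256(payload).hexdigest()

def record_message(sender_id: str, header_checksum: str) -> None:
    """Register the checksum the first time it is seen; forwards carry the same
    header checksum, so later lookups map back to the original sender."""
    _origin_registry.setdefault(header_checksum, sender_id)

def trace_origin(header_checksum: str) -> Optional[str]:
    """Given the header of a forwarded copy, return the first known sender."""
    return _origin_registry.get(header_checksum)

# Usage: the same checksum forwarded by user-B still traces back to user-A.
fp = checksum(b"<end-to-end encrypted payload>")
record_message("user-A", fp)   # original message
record_message("user-B", fp)   # a forward; the registry keeps user-A
print(trace_origin(fp))        # -> user-A
```

The design choice worth noting is that the server handles only header metadata; the content of the message remains opaque to it.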

In view of the above, the technical infeasibility objections to tracing the origin of a message are unsustainable in the current age of AI-driven technology. These are false excuses.

However, while issuing the new guidelines, the government may have to recognise that some views on Section 79 have been expressed by the Supreme Court in Google India Private Limited vs Visakha Industries, and the proposed amendment has to be compatible with the views expressed therein. This case involved a complaint of defamation and the non-removal of the content by Google when demanded. It also opened a discussion on the concept of “due diligence” as per the original version of Section 79 in ITA 2000 and the amendment made in 2008 which became effective from October 27, 2009.

The final outcome of this judgment was focused more on the applicability of the law with reference to the date of the incident. But during the course of the judgment, some important principles of international jurisdiction and the scope of “due diligence” emerged. These would be relevant in analysing the proposed intermediary guidelines. It may be noted that the original version of Section 79 required “due diligence” to be exercised to “prevent the commission of offence”. The due diligence under the old Section 79 had not been expanded with any notification of rules and hence was an open-ended responsibility.

In the case of the amended Section 79, which is applicable now, the law requires that “the intermediary observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe in this behalf”. It therefore extends beyond “prevention” at the point when the data enters the control of the intermediary, to monitoring throughout its lifecycle.

Additionally, the concept of “due diligence” has been detailed in the Intermediary Guidelines notified on April 11, 2011, which are now proposed to be replaced with an amended version. The Court recognised that the amended Section 79 provided protection from liability not only in respect of offences under ITA 2000 but under other laws as well, which was welcomed by the industry as an expansion of the safe harbour provisions.

At the same time, we need to observe that the scope of Section 79 has expanded significantly in terms of how the government may exercise its regulatory powers and also the level of control that the intermediary is expected to implement as part of the compliance requirements.

In view of the vindication of the current version of Section 79 in the Visakha judgment and the unsustainability of the technical infeasibility objections raised by the intermediaries, they seem to have no option but to accept the accountability that the amended guidelines prescribe. The challenge mounted in the Supreme Court may, therefore, end up only with a clarification of the procedures related to content removal.

However, the Court could suggest some standard measure to ensure that in the period between when the victim notices the harm and brings it to the knowledge of the intermediary and when a Court comes to a decision, the victim gets some interim relief which is fair to both parties. Hence, if a notice for removal is received by an intermediary, it should, pending an order from a Court, exercise caution to prevent continuation of the alleged damage. Ignoring the knowledge of alleged damage would be neither legally wise nor ethically justifiable.

In such cases, the content may continue but it should be flagged as “reported objectionable vide notice received from ….” with a hyperlink to a copy of the notice. The flag may be removed after a reasonable period, such as 90 days, if no court order is received.
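A minimal sketch of how such a notice-based flag with a 90-day lapse could be represented is given below; the field names, URL and dates are assumptions used only to illustrate the suggested procedure:

```python
from dataclasses import dataclass
from datetime import date, timedelta

FLAG_VALIDITY = timedelta(days=90)   # flag lapses if no court order is received

@dataclass
class ObjectionFlag:
    content_id: str
    notice_url: str       # hyperlink to the copy of the removal notice
    received_on: date

    def label(self) -> str:
        return f"Reported objectionable vide notice received from {self.notice_url}"

    def is_active(self, today: date, court_order_received: bool = False) -> bool:
        # A court order, whichever way it goes, supersedes the interim flag;
        # otherwise the flag simply lapses after the 90-day window.
        if court_order_received:
            return False
        return (today - self.received_on) <= FLAG_VALIDITY

flag = ObjectionFlag("post-42", "https://example.org/notice-123", date(2020, 2, 1))
print(flag.label())
print(flag.is_active(date(2020, 3, 1)))   # True: within 90 days, flag shown
print(flag.is_active(date(2020, 6, 1)))   # False: window lapsed, flag removed
```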

This measure will ensure that the delay in obtaining court orders does not continue to harm the victim to the same extent as it otherwise would. If such a measure is not available, every complainant will seek relief in the form of an interim order to block the content.

If such a request is agreed to by the trial court, the content remains blocked until the case is settled, which may last for years. It would be good if the suggested procedure of dispute management is included as part of the intermediary guidelines.

The writer is a cyber law and techno-legal information security consultant based in Bengaluru

Unquote:

Naavi

 


First Set of Certified Data Protection Officers in India are set to emerge

February 23, 2020 is set to be a historic day in the development of the Data Protection ecosystem in India. It is the day when the very first batch of professionals will face the challenge of getting certified by the Foundation of Data Protection Professionals in India (FDPPI).

The participants of this initial batch are those who undertook a six-week-long online training provided by Cyber Law College (www.cyberlawcollege.com).

The current batch is a small batch of foundation members of FDPPI and will form the backbone of such certification programs in the future. This batch has been trained, and the certification program is being administered, solely by Naavi.

After the successful conduct of this program, the certification mechanism will be taken over by FDPPI, and more such programs, both for training and for certification, will follow.

This Certification is titled “Certified Data Protection Officer in India – Level 1” and incorporates awareness of the law as of date. It will be followed by higher levels in due course, as additional skills are added at different levels, including advanced awareness of the law as it emerges, technical skills, leadership skills and awareness of international laws. In totality, this Certification is unique and is conceived at a level higher than the systems currently available in other countries.

While many Indian professionals do get certified through international agencies, FDPPI envisages the creation of “Ethical Data Protection Professionals” who have their primary expertise in the Indian market.

This indigenous system of Certification is considered essential as the principle of “Data Sovereignty” is embedded in the Indian data protection laws and needs to be incorporated into the system of training and certification.

The motto of FDPPI is to create “Knowledgeable, Skilled and Ethical Data Protection Professionals” and the Certification program would be a significant step in this direction.

Naavi

 

 


Media Mischief puts Cognizant in an embarrassing light

Cognizant is a respected MNC with a significant stake in the Indian data processing industry. It is not a company of the type of Facebook, WhatsApp or Google, which are known to have woven their business around harnessing (exploiting?) “Personal Data” and profiling data principals/subjects.

But today’s ET carries an article with the headline “Cognizant sees data protection bill increasing costs, obligations”.

The article also carries a photograph of Cognizant’s building, captioned with the same headline.

This is a clear indication that the reporter wants to attribute the opinion conveyed in the article to Cognizant and no other company.

The body of the article says

“IT services provider Cognizant has said the Personal Data Protection Bill, 2018 could impose stringent obligations on it for localisation of sensitive data, and along with regulatory changes in other countries may lead to additional compliance costs.”

“Complying with changing regulatory requirements requires us to incur substantial costs, exposes us to potential regulatory action or litigation, and may require changes to our business practices in certain jurisdictions, any of which could materially adversely affect our business operations and operating results”.

The second paragraph of the article mischievously makes another quote from a company called Teaneck, a New Jersey-based firm which is more directly critical of the Indian law, and says

“If enacted in its current form, it would impose stringent obligations on the handling of personal data, including certain localization requirements for sensitive data. Other countries have enacted or are considering enacting data localization laws that require certain data to stay within their borders.”

Gartner’s quote in the same article says: “Some short-term disruptions are possible, but the answer to this would be significant investment in data governance. It will impact smaller providers more because they have to create special provisions, change processes and systems of transferring data.”

The journalistic mischief is in presenting the facts as if Cognizant is critical of the introduction of PDPA in India. It is well known that there is a lobby in India which is critical of the Bill for various reasons. The article clubs Cognizant with such companies.

It is to be noted that in the 2018 version, there was opposition to the data localization aspect. Unfortunately, the Government yielded to the pressure of the industry and diluted the provisions in the 2019 version, which is currently under discussion. This ET article suggests that Cognizant is still unaware that PDPA 2018 has now become PDPA 2019 and is set to become PDPA 2020. The reaction attributed to Teaneck also indicates a lack of awareness of PDPA 2019.

Assuming that the comments are drawn from the annual report, it is fine for the management to acknowledge that there is a changing legal landscape and that there will be additional costs associated with it. This is a normal expectation in any financial report to shareholders.

But a reputed publication like ET should remember that quoting a dated comment in the current context, where the public is watching the comments on the PDPB 2019 being submitted to the Parliamentary Committee, is a misuse of journalistic freedom. Had the reporter added a sentence that a revised version of the bill, with a diluted data localization measure, is now under discussion, he would have been more truthful.

I request Cognizant India to issue a clarification that it is not the view of Cognizant that the introduction of the PDPA for the protection of the privacy of Indian data principals, as directed by the honourable Supreme Court, is undesirable or imposes any unreasonable costs on its operations in India.

Naavi


Is Insurance Industry ready for PDPA?

On February 7, 2020, I attended a day-long seminar at Hotel Trident, Mumbai, organized by the National Insurance Academy, Pune, jointly with Swiss Re.

The program, titled “Digital Disruption..Embracing Digital Innovation in [RE] Insurance business”, was a grand success and was well attended by insurance professionals. It was inaugurated by the Chairman of IRDAI in the presence of the CMD of LIC and other dignitaries.

While there were interesting discussions on the innovative use of technologies in insurance, there was also a discussion on Cyber Insurance.

Despite the enormous enthusiasm that the industry is showing towards the adoption of technology, it was observed that the industry appears to be significantly lagging behind developments in the field of Cyber Insurance and needs to redouble its efforts in developing Cyber Insurance products and services.

I had observed in my earlier article “Golden Era ushered in for Cyber Insurance industry through personal data protection act of India”  that there was a huge opportunity begging to be harnessed by the industry consequent to the Personal Data Protection Act that is on the anvil.

However, the industry even now appears to be looking only at how to adopt IT in its traditional insurance business, and the adoption of risk assessment and insurance coverage in cyberspace is at a very nascent stage. It appears that the insurance industry in India will miss the gold rush arising out of the PDPA.

More importantly, if the insurance industry does not gear up to the needs of the industries that will be embracing the PDPA, those industries will be left high and dry, unable to get the adequate coverage they would be looking for. In the process, many insurance contracts are likely to be written without a proper understanding of the inherent risks covered. In a way, the industry will have to go through a period of blind PDPA risk coverage policies which exist only on paper and are useful neither to the insurer nor to the insured.

During the discussions, it was a surprise to note that there was no mention of the recent Breach Candy Hospital data breach, which should have dominated the discussions had there been a proper appreciation of the impact on the industry if the breach had occurred after the PDPA was in force.

There was also a lot of discussion on the use of AI in insurance, which needs to be moderated and adapted to the advent of the PDPA. There was a complete lack of recognition that many of the AI solutions will be in serious conflict with the PDPA.

It was interesting to note that the IRDAI has recently introduced a “Sand Box” system for the insurance industry to test new products. Since the PDPA is also coming out with a sandbox concept of its own, the users of new insurance products based on the use of AI will need to contend with two sandboxes: one for the use of personal data in developing profiles of the insured, which will be under the PDPA, and the other for the structuring of the insurance policy.

Naavi pointed out that the PDPA will usher in new challenges, such as providing cover for “Administrative Fines”, which would conceptually mean covering the failure to do the obvious. The industry will have to decide on coverage based on the reasons for which an administrative fine is imposed. If the reason is an external cyber attack, the coverage may stand. But if the main reason is a failure of internal systems, there could be resistance from the insurance industry to honour a claim.

Naavi also pointed out the difficulty in valuing personal data, since its value in the hands of the data fiduciary/processor would vary as it travels through its life cycle. Even the data ownership may change during the lifecycle of personal data, requiring proper capture of the ownership in the insurance contracts. (Some of these problems would be evident to readers who go through Naavi’s recent book on PDPA.)

Naavi also pointed out the conflict with the general principle of “Co-Insurance” when the limit on administrative fines under the PDPA is defined as 4% of global turnover. Since this becomes the benchmark of “Insurable Interest” for a company, if the actual policy limit for administrative fines is less than 4% of global turnover, there could theoretically be an “Under-Insurance” of the liability.
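A purely illustrative calculation of how a classical pro-rata average (co-insurance) clause would scale down a claim when the sum insured is less than the 4%-of-global-turnover benchmark is sketched below; all the figures and the function name are assumptions, not drawn from any actual policy:

```python
def average_clause_payout(fine: float, sum_insured: float, insurable_interest: float) -> float:
    """Classical pro-rata average: the payout is scaled by the ratio of the sum
    insured to the full insurable interest, and capped at the sum insured."""
    scaled = fine * min(1.0, sum_insured / insurable_interest)
    return min(scaled, sum_insured)

global_turnover = 1_000_000_000                 # assumed global turnover (Rs)
insurable_interest = 0.04 * global_turnover     # 4% benchmark = 40,000,000
sum_insured = 20_000_000                        # assumed policy limit (Rs)
fine = 30_000_000                               # assumed administrative fine (Rs)

payout = average_clause_payout(fine, sum_insured, insurable_interest)
print(f"Insurable interest: {insurable_interest:,.0f}")        # 40,000,000
print(f"Payout after under-insurance scaling: {payout:,.0f}")  # 15,000,000
```

In this assumed example, insuring only half of the 4% benchmark means the insured recovers only half of the fine, which is the under-insurance problem referred to above.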

Additionally, the PDPA risk is almost always a risk of “Consequential Loss”, while the primary risk is one of a “Cyber Crime” arising out of an information security failure. Hence, the risks covered under existing Cyber Insurance policies may themselves expand to include the administrative fines under the PDPA unless these are specifically excluded.

In view of all the complexities that Cyber Insurance as well as PDPA risk insurance involve, the time has come for the industry to consider whether there is a need to make a major surgical change to the insurance law in India on the lines of what China has done, by giving up the principle of “utmost good faith” in favour of a contract of “honest disclosure”.

Without this major change in insurance law, it will be difficult for insurers to provide the required risk coverage to industry arising out of cyber risks and PDPA risks.

Hope the IRDAI and the Government will take a look at this requirement.

In the immediate future, IRDAI has to try to establish some codes and practices that it can suggest to the DPA, so that the insurance industry is able to adapt to the PDPA without much of a problem. If necessary, IRDAI should set up an expert committee for this purpose at the earliest.

One of the requirements that will arise, in the context of the inability of the insurance industry to come up with a suitable product, is for the other sectoral industry regulators to come up with a concept of “Peer to Peer Insurance” through the constitution of a “Data Insurance Fund”, on lines similar to the Deposit Insurance and Credit Guarantee Fund in the Banking industry. I will expand on this concept in subsequent articles.

Naavi

Also Refer:

Cyber Insurance Pricing.. Finextra

 


PDPA Risk Insurance

India is on the threshold of a new legislation called the Personal Data Protection Act (PDPA-2020). One of the most striking features of this legislation is that organizations processing “Personal Data” in any form, including Government departments, will hereafter have to worry about a new kind of financial liability: the risk of being fined by the Data Protection Authority for “non-compliance with the provisions of the Act”.

While the organizations that process personal data need to be ready with the knowledge and preparation needed to stay compliant with the law, one of the solutions that every personal data fiduciary/processor in India would be looking forward to is an insurance policy with which they could get themselves covered.

It is possible to argue that the administrative fines that may arise consequent to non-compliance with PDPA 2020 are a consequential loss of running the business and hence could technically be covered under current business-related insurance policies.

However, since the PDPA administrative fines were not envisaged when those policies were underwritten, and the amount involved could be as high as 4% of the global turnover of the company, it is difficult for insurance companies to consider the risk covered unless a fresh endorsement is made and additional premium collected.

The organization will therefore have to take a view on which risks are to be insured under the PDPA: whether to restrict cover to the first-party risk of administrative fines only, or to include the third-party risk of paying compensation to data principals.

The Insurance companies also need to structure a policy that suits the requirements of the PDPA.

We are certain that the insurance companies in India are far from thinking about structuring a policy for PDPA risk coverage, and it is possible that they will look to the West for re-insurance terms before they start underwriting the risks.

The PDPA risk coverage will be complex because the underlying asset is personal data, which is intangible, goes through a life cycle of varying value, has unclear ownership, and involves losses that are difficult to estimate. The fines arise if there is negligence in the implementation of PDPA compliance, and whether insurance companies relish insuring negligence is a moot point.

Maybe there is a lot to debate in this field, and the discussions have just started.

Naavi


Print Version of the book on Personal Data Protection Act by Naavi

Naavi.org is glad to announce that the print version of the book Personal Data Protection Act (PDPA 2020), written by Naavi based on the Bill presently before Parliament, will be available shortly.

The book is being released now, before the passage of the Act, with the objective of making some reading material available to the Parliamentarians who will be discussing the Bill for passage, and also to all those persons who have to present their views to the Parliamentary Committee.

The book is being released in the next couple of days by the publishers at a market price of Rs 600/-.

A limited number of copies will be made available to Naavi.org followers at a pre-order discounted price of Rs 450/-. This will be a limited-period offer and will be available on request. The exact modalities of how the discount will be passed on will be provided to those who want to avail of the offer.

This offer would also be available to all the students of Cyber Law College who have taken the courses through Cyber Law College or Apnacourse.com.

Requests may be sent by e-mail to naavi@naavi.org with the subject line “PDPA2020”

Naavi

 
