Advertising Industry needs to wake up to the demands of DPDPA 2023

Naavi, as part of his career development, spent around 11 years in the advertising industry and closely participated in the activities of a full-service advertising agency: creating and building brands, understanding consumer behaviour through research, reaching out to consumers and crafting effective communication to carry a message to the masses. Naavi's involvement in advertising came during the period when the Internet made its entry, when the industry was transforming from newspapers to the TV medium and advertising on websites was just appearing on the horizon. At that time Naavi had also conceived and pursued a patent, "Adview Certification", which involved implanting an intelligent beacon on a website to monitor the behaviour of its visitors and develop a reliable visitor metric, like the TRP in the TV industry or the ABC (Audit Bureau of Circulation) in print.

With this background, if we now look at the developments worldwide on "Privacy", it appears that the digital advertising industry is one sector facing an existential threat on account of privacy laws. While the Fintech and Health sectors also have many hurdles to cross, those are to some extent manageable. But the digital advertising industry, which is at the root of all marketing activities and has to design communication appropriate to the target audience, has a real uphill task, to the extent that many may feel there is no way the industry can be fully compliant and hence the winner is the one who is best at deception.

The data analytics industry has two parts to its activities, namely the analysis of anonymized data and the analysis of identified personal data. The data science industry built on anonymized data may not be affected by privacy laws if we accept that "anonymization of previously identifiable personal data" is similar to "deletion" and does not require any explicit consent of the data principal. However, the analysis of identifiable personal data is closely associated with "targeted advertising" and faces the same problem as the advertising industry. In fact, the analytics of identifiable personal data and digital advertising work in close unison, and hence their problems are similar.

To understand the issue, let us start with the simplest of tasks, namely sending e-mails offering products or services without prior consent. At present we call these "unsolicited emails" or "spam". "Causing annoyance" through repeated unsolicited emails is a punishable offence under some laws (as it is for unsolicited phone calls).

Does this mean that the only way an organization can reach out to its prospective customers is through "search engines" and "voluntary walk-in enquiries"? Unsolicited mobile calls are a little more annoying than unsolicited emails, since calls cause a greater disturbance; emails at least provide an opportunity to respond at leisure and hence make a smaller demand on the critical time of the receiver.

The privacy law makers and the advertising industry have to sit together and sort out this issue, including whether a polite "e-mail requesting permission to send the next, more detailed email about the service", say once a year, should be treated as a permitted one-time activity.
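To make the idea concrete, here is a minimal sketch of how a data fiduciary might enforce such a rule in its outreach system. Everything here is an assumption for illustration: the shape of the `prospect` record, the field names and the one-year interval are invented, and nothing of this kind is prescribed by DPDPA 2023.

```python
from datetime import datetime, timedelta

# Assumed policy: at most one polite permission-request email per year.
PERMISSION_REQUEST_INTERVAL = timedelta(days=365)

def may_send_permission_request(prospect: dict, now: datetime) -> bool:
    """Decide whether a 'may we write to you?' email is permissible.

    No marketing mail goes out without consent; an explicit opt-out is
    honoured indefinitely; otherwise one permission request a year is allowed.
    """
    if prospect.get("opted_out"):
        return False          # an explicit refusal must be respected
    if prospect.get("consent_given"):
        return True           # consented prospects may receive the detailed mail itself
    last = prospect.get("last_permission_request")
    return last is None or (now - last) >= PERMISSION_REQUEST_INTERVAL

# Example: the last request was over a year ago, so one more is permissible.
prospect = {"email": "someone@example.com", "consent_given": False,
            "opted_out": False, "last_permission_request": datetime(2023, 1, 5)}
print(may_send_permission_request(prospect, datetime(2024, 3, 20)))  # True
```

Whatever rule the regulators finally agree upon, the point is that it should be mechanically enforceable in this fashion, with the opt-out always taking precedence.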

The other points of discussion which we may take up in continuation are:

1. Profiling a customer for the purpose of market segmentation and targeted advertising

2. Collecting personal information through cookies set by advertising agencies/adtech companies on the websites of client companies, and the consent mechanism for the same (a sketch of such a consent gate follows this list)

3. Regulation of information collected by an ad agency/adtech company through cookies for one advertising client being used for profiling and made available to other clients
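Taking up the second point in a concrete form: a website can simply refuse to set any advertising or analytics cookie whose category the visitor has not opted into. The sketch below assumes a hypothetical consent record of the kind a consent-management platform might hold; the cookie names and category map are invented for illustration.

```python
# Invented cookie-to-category map; a real site would maintain its own.
COOKIE_CATEGORIES = {
    "session_id": "strictly_necessary",   # needed for the site to function
    "ga_visitor": "analytics",
    "adtech_profile": "advertising",      # third-party profiling cookie
}

def cookies_permitted(consent: dict, requested: list[str]) -> list[str]:
    """Filter the cookies a page may set against the visitor's recorded consent.

    Strictly necessary cookies need no consent under most regimes; every
    analytics or advertising cookie requires an explicit opt-in for its category.
    """
    allowed = []
    for cookie in requested:
        category = COOKIE_CATEGORIES.get(cookie, "advertising")  # unknown -> strictest
        if category == "strictly_necessary" or consent.get(category) is True:
            allowed.append(cookie)
    return allowed

consent = {"analytics": True, "advertising": False}
print(cookies_permitted(consent, ["session_id", "ga_visitor", "adtech_profile"]))
# ['session_id', 'ga_visitor'] -- the profiling cookie is blocked without opt-in
```

The third point is harder, because once profile data leaves the adtech company for another client no such simple gate exists; that is precisely why it needs regulation.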

Internationally there is a discussion on the “Diligence Requirements for the Adtech Industry” for demonstrating lawful consent for collecting and selling personal data. (Refer article in iapp.org).

This article flags the efforts of the Interactive Advertising Bureau and SafeGuard Privacy on a tool called the IAB Digital Platform. The platform will contain a set of standardized privacy diligence questions specially designed for participants in the digital advertising industry. This is a good and much-needed industry initiative.

Some of the requirements mentioned here were included in the WebDTS concept which FDPPI promoted, though a frustratingly large degree of non-compliance was observed. Perhaps in India we need the advertising industry regulators to start treating "compliance with DPDPA 2023" as a requirement to be considered. At present the advertising industry, and more particularly the AdTech companies, appear completely unconnected with DPDPA 2023. The end-user companies may try to escape responsibility by stating that the "ad service provider is a Joint Data Fiduciary" responsible for DPDPA compliance. But with many of these providers operating AI platforms hosted on the websites of their clients, and the information collected being that of a customer of a customer, there is very little possibility of "consent" being obtained.

While compliance activists like us keep pointing out these issues, the entities subject to compliance continue to feel that compliance is "impractical". The advertising industry needs to sit together and find a proper solution to this problem at the earliest.

(Comments welcome)

Naavi


Business Contact Address and DPDPA 2023


The applicability of DPDPA 2023 to what can be called a "Business Contact Address" is a much-debated issue in privacy circles.

DPDPA 2023 is applicable to "personal data", and there are many obligations associated with the collection and use of personal data. However, whether the same rules apply to "business contact data", such as a business email, is a point that has been left to privacy jurisprudents to debate.

In DPDPA 2023, there is one mention of "business contact information", under Section 8(9), which states: "..A Data Fiduciary shall publish, in such manner as may be prescribed, the business contact information of a Data Protection Officer,…".

This indicates that the term "Business Contact Information" is recognized in Indian law, though it is presently not defined in the definitions section of the Act.

The Singapore PDPA 2012 provides a clear definition as follows:

“business contact information” means an individual’s name, position name or title, business telephone number, business address, business electronic mail address or business fax number and any other similar information about the individual, not provided by the individual solely for his or her personal purposes;

GDPR contains no definition of business contact information, but given the general approach of GDPR, which takes extreme views on privacy, the common understanding is that if information relates to an individual then it is considered personal data even if it is a work email such as vijay@ujvala.com. On the other hand, if Vijay is the Director of Ujvala and the work email is director@ujvala.com, most people agree that it is business contact information and "not personal data".

In the Cavauto S.r.l. case, the Italian supervisory authority held that an employee who had stored his personal data under an email account such as "customercare@cavauto" could still be considered the owner of that data as personal data, and that it was not accessible by the company without consent. This essentially upheld the view that even a corporate email account could be personal data.

However, this extreme view of the GDPR authorities cannot be taken as a general guideline and needs to be considered an aberration rather than a "precedent". Judicial authorities often make mistakes, and such decisions get overridden by superior authorities. This is one such instance where we may say the decision was context-specific and should not be treated as settling a jurisprudential view.

Our view has always been that a property like a work email, which is assigned by the employer, hosted on the server of the employer, and which the company has the power to deactivate on termination of the employee, should be considered the property of the employer and not the employee. A business email should therefore, without any doubt, be considered a "business asset" and not a "personal asset", and a work email or any corporate identity provided by the company is better treated as "non-personal data".

As regards classifying an email address as personal or business, it is also necessary to look at the context. Since Privacy is the "right of choice" of an individual over what he considers "personal data", the final choice of whether vijay@ujvala.com is a personal email or not is left to the individual himself. If he uses it in a personal context, then in that context it becomes a personal email even though by default it may not be. On the other hand, vijay@gmail.com may by default be considered a personal email but could equally be declared by the individual to be a business email.

Hence it is unnecessary and improper to decide whether an email is personal or not based on the domain attached to the email server. It is for the information gatherer (the data fiduciary) to get an indication from the data principal whether a given email is to be treated as a personal email or a business email. This should be taken care of at the consent-gathering stage.
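In practice this means the consent form carries one extra field: the data principal's own declaration. A minimal sketch follows; the record structure and function names are invented for illustration and are not drawn from the Act.

```python
from dataclasses import dataclass

@dataclass
class EmailDeclaration:
    """What the data principal declares at consent-gathering time."""
    address: str
    declared_as: str  # "personal" or "business", chosen by the data principal

def treated_as_personal_data(decl: EmailDeclaration) -> bool:
    """Classify by the principal's declaration, never by the domain."""
    return decl.declared_as == "personal"

# The same address can carry either classification depending on the declaration:
print(treated_as_personal_data(EmailDeclaration("vijay@ujvala.com", "personal")))  # True
print(treated_as_personal_data(EmailDeclaration("vijay@ujvala.com", "business")))  # False
```

Note that the domain never enters the decision; that is the whole point of leaving the choice to the data principal.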

Under DPDPA 2023, since the Act recognizes that an email can be a "business contact", the arguments that

"personalname@companyname is by default non-personal data but could be considered personal data at the choice of the individual"

and that

"personalname@gmail.com is by default personal data, though the person has the choice of making it a business contact (non-personal data)"

should be considered relevant.

An email address such as designation@companyname is also by default non-personal data, but perhaps requires an explicit confirmation to be treated as personal data rather than being classified purely on context.

In other words, our view is that personalname@company.com can, by context, be considered a business contact, while designation@company.com can be converted into a personal email only by explicit consent and cannot be deemed so by context.

..Open for debate

Naavi


AI Risk Management under DPDPA 2023

"Artificial Intelligence" is a term that is sweeping the software world, and naturally it has also percolated into the discussions of "Privacy" and DPDPA 2023.

The industry is now presented with a new standard, ISO 42001, so that along with ISMS and PIMS, the concept of AIMS has become the buzzword.

ISO 42001 is a standard that tries to establish the requirements of an AIMS (Artificial Intelligence Management System) with a focus on the system being a "Responsible AI System". The standard can be used both by the AI developer and by the user.

Though the standard should be a good guideline for many companies, it appears that as regards privacy, the AIMS as suggested needs some more tweaking.

AIMS as envisaged is like PIMS and has to be considered part of the ISMS. In other words, though a stand-alone certification is envisaged under ISO 42001, an organization cannot avoid ISO 27701 and ISO 27001 if it has to adopt ISO 42001 for privacy. The effect is that about 40 new controls get added to the 93 controls of ISO 27001 and the 49 controls of ISO 27701.

In the DGPSI system, FDPPI proposes to consider AIMS, PIMS and ISMS as parts of the DGPMS and accommodates all the controls within 50 implementation specifications. In this approach, most of the individual controls of the ISO system that make it bulky and unwieldy get absorbed into the customization of controls through the policies and processes developed in the user environment.
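To illustrate the idea of absorption (and only to illustrate it: the specification identifiers and control references below are invented, not the actual DGPSI or ISO numbering), one implementation specification can be recorded as covering several ISO-style controls at once:

```python
# Invented identifiers, purely to show the many-to-one absorption idea.
DGPMS_SPEC_MAP = {
    "SPEC-01": {"policy": "Unified security and privacy policy",
                "absorbs": ["ISMS control A", "PIMS control B"]},
    "SPEC-02": {"policy": "AI adoption and DPIA process",
                "absorbs": ["AIMS control C", "AIMS control D"]},
}

def absorbed_controls(spec_id: str) -> list[str]:
    """List the individual controls one implementation specification covers."""
    return DGPMS_SPEC_MAP.get(spec_id, {}).get("absorbs", [])

print(absorbed_controls("SPEC-02"))  # ['AIMS control C', 'AIMS control D']
```

The gain is that an auditor reviews 50 such specifications instead of 180-odd individual controls, with the detail pushed into each organization's own policies and processes.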

We hope this simplification will be useful to the industry while leaving scope for implementers to design the controls as per their specific needs.

Naavi


Implications of US Bill on Cross-border Data Transfer

A Bill has been passed in the US to protect the sensitive data of US citizens by restricting cross-border data transfer to countries considered "adversaries". To some extent this reflects the thinking behind Section 16 of DPDPA 2023, which also has an enabling provision to restrict transfer of personal data collected in India to countries that may be notified as "blacklisted countries". China, Iran, North Korea, Cuba, Russia and the Maduro regime in Venezuela are currently on the US list of such adversaries. India is yet to declare its blacklist of countries under Section 16.

(Refer report in cnbc.com)

The bill bans organizations that profit from selling personal data, known as data brokers, from making data accessible to a foreign adversary country or entities controlled by adversaries.

It also authorizes the Federal Trade Commission to seek civil penalties of more than $50,000 for each violation.
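Operationally, such a restriction reduces to a pre-transfer check against the notified list. The sketch below is illustrative: the country codes mirror the list reported above, and the penalty figure is the one cited in the report, not a computed liability.

```python
# Country codes mirroring the reported US adversary list (illustrative only;
# any Indian list under Section 16 of DPDPA 2023 is yet to be notified).
ADVERSARY_COUNTRIES = {"CN", "IR", "KP", "CU", "RU", "VE"}
CIVIL_PENALTY_PER_VIOLATION_USD = 50_000  # figure cited in the report above

def transfer_allowed(destination: str, controlled_by: str | None = None) -> bool:
    """Block a data-broker transfer to an adversary country or an entity it controls."""
    return (destination not in ADVERSARY_COUNTRIES
            and controlled_by not in ADVERSARY_COUNTRIES)

print(transfer_allowed("DE"))                      # True
print(transfer_allowed("SG", controlled_by="CN"))  # False: adversary-controlled entity
```

An Indian blacklist notified under Section 16 would slot into the same kind of check.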

India has to stay on guard so that such lists do not become so flexible as to include any country on which ad hoc sanctions are imposed. We may recall that in the early days of the Ukraine war, many US companies cited US sanctions to threaten stoppage of IT services in India. This makes the country's dependence on US companies, including Microsoft and Google, a long-term national security risk.

Naavi


"We Want a License to Misinform"..???

Mr Kunal Kamra, the "stand-up comedian", has approached the Supreme Court to challenge the Central Government notification forming Fact Checking Units (FCUs) under the Information Technology Amendment Rules, 2023.

Effectively it is a plea that demands the right to call oneself a “Comedian” and publish false information in the guise of parody or fun.

The earlier attempt to get a stay on the MeitY rules was caught in a split judgement at the Bombay High Court, and no stay was granted. Now that the elections are in the offing and some people feel a dire need to spread false news, the petitioners have again approached the judicial system with an appeal to the Supreme Court.

On 20th March, MeitY issued the notification designating the fact check unit under the Press Information Bureau (PIB).

According to the news report at Bar and Bench, Mr Kamra pressed for an immediate stay of the rules, stating:

“While the Impugned Rules (and the IT Rules in general) are facially directed at intermediaries, it is users (and the information created and hosted by them on various platforms) that are the subject of the Impugned Rules … [it] is extremely broad in its sweep, and would operate to muzzle speech against the Central Government,” 

Mr Kamra falsely claimed:

“The units would be empowered to direct social media companies to take down any content the government deems fake, false or misleading, but without due process, “

This is false, since the FCU only has the power to notify that a certain news item is fake; it has no right to give any direction to the intermediary.

He further contends:

“The Impugned Rules do not contemplate the issuance of a notice to the user prior to the identification of information by the FCU, or prior to the takedown by the intermediary … they would inter alia apply to any content hosted by intermediaries that contradict facts, figures or data of the government,”

This is an impossibility, since the identity of the user is known only to the platform; the FCU will not know the source except as declared. The platform, as an intermediary, is responsible for notifying its users not to do things which are illegal, and spreading false information is illegal per se.

Mr Kamra says

“Petitioner, who is a political satirist, makes his living out of commenting on what can be stated to be the “business of the Central Government”. Further, most of the Petitioner’s engagement with the public is driven by social media. Any hindrance in accessing the accounts of the Petitioner will be a violation of his fundamental right to practice his profession … will affect his livelihood and right to live with dignity,”

It is to be noted that the so-called fundamental right of the petitioner is an infringement on the fundamental right of citizens, who are entitled not to be cheated and misled by false information.

The petition is set to be heard by the Supreme Court today. Knowing the Supreme Court, we will not be surprised if it grants the stay, even though Mr Kapil Sibal and Mr Abhishek Manu Singhvi are not representing the petitioner.

Before any such knee-jerk reaction, I urge the Supreme Court to read the words and intentions behind the relevant notification and to consider why the petition is a demand to disrupt fair information dissemination during election time, when the model code of conduct is in force. Mr Kamra wants to indulge in election campaigning without being bothered by the need to maintain truth.

I would like to reproduce the relevant clause 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, under which the PIB has now been notified as a Fact Checking Unit.

Due diligence by an intermediary: (Para 3.1)

An intermediary, including [social media intermediary, significant social media intermediary and online gaming intermediary], shall observe the following due diligence while discharging its duties, namely:—

(b) the intermediary shall inform its rules and regulations, privacy policy and user agreement to the user in English or any language specified in the Eighth Schedule to the Constitution in the language of his choice and shall make reasonable efforts to cause the user of its computer resource not to host, display, upload, modify, publish, transmit, store, update or share any information that,—

(v) deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature or is identified as fake or false by the fact check unit at the Press Information Bureau of the Ministry of Information and Broadcasting or other agency authorised by the Central Government for fact checking or, in respect of any business of the Central Government, by its department in which such business is transacted under the rules of business made under clause (3) of article 77 of the Constitution];

The petitioners and the Court need to understand that the guideline is a "due diligence requirement" under which the intermediary shall inform users, through its published policy documents, that they shall not "knowingly" or "intentionally" host misinformation which is "patently false" or has been identified as false by the fact check unit.

It is to be noted that the requirement penalizes neither the person hosting the content nor the platform. It only denies the platform the protection of Section 79 of ITA 2000 in the event that a Court is moved on the ground that the "false" information is causing damage.

It is a figment of the petitioner's imagination that, because of the possibility of a future case being hoisted on the platform, and because the defence of Section 79 would not be available in such an event, the platform would today disallow Mr Kamra from hosting his shows, which by his own admission enable him to "make his living out of commenting on what can be stated to be the business of the Central Government", a high-risk business which can disrupt the electoral democracy of the country.

Naavi.org, as a representative of the citizens of India, contends that the petition is a request for a license to indulge in mischief and disturb the proper conduct of the elections in India. Allowing misinformation without accountability of the platform will not only enable Mr Kamra to misuse social media but also encourage the use of deepfakes and other technologies to misinform the public at a time when there will be no remedy available to prevent damage to the electoral process.

Hence we hope that the Supreme Court does not allow this petition, particularly at this point of time when the nation is in the midst of an election. The Court should also consider the views of the Election Commission in this regard and not issue a stay on the basis of the petition, which would be unjustified and would contribute to the vitiation of the electoral atmosphere.

Naavi


The EU Act on Artificial Intelligence

After long deliberation, the EU Parliament has adopted the EU AI Act, setting in motion a GDPR-like moment in which similar laws may be considered by other countries. India is committed to revising ITA 2000 and replacing it with a new Act, which may happen in 2024-25 after the next elections, and that Act should include special provisions for regulating AI.

Presently, Indian law addresses AI through ITA 2000 and DPDPA 2023. ITA 2000 assigns accountability for AI to the AI developers, who may transfer it to the licensees of the algorithms developed (Section 11 of ITA 2000). Where an AI model uses personal data for its learning, DPDPA 2023 may apply and treat the algorithm user as a "Data Fiduciary" responsible for consent and the accuracy of processing.

An advisory issued recently by MeitY has suggested that platforms which permit the hosting of AI derivatives (e.g. videos) need to take the permission of MeitY.

DGPSI, a framework for implementation of DPDPA 2023, suggests that an AI algorithm vendor be considered a "Data Processor" or "Joint Data Fiduciary" and that a DPIA be conducted before adoption.

In the light of the above, we can quickly look at the approach of the EU AI Act and draw some thoughts from it for implementing "due diligence" while using AI in data processing.

The approach of the EU AI Act is to define AI, classify AI algorithms on the basis of risk, and provide graded regulatory control ranging from no control to an outright ban.

The Act defines AI as follows:

A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments

The main distinguishing feature of AI is this: all software consists of coded instructions which get executed automatically in sequence, but AI carries a special kind of instruction that modifies its own code under certain conditions, so that it becomes self-correcting. This aspect has been captured in the definition.

However, the more critical aspect, "drawing inference from inputs and generating outputs", arises when the input is a visual or a sound that the AI can match against its machine-learnt experience, identify with a specific character and respond to. For example, on hearing a sound, the AI may infer "this is the voice of Naavi" and respond; this is "voice recognition", and it involves referring to an earlier database of voices that the AI can remember or consult. Similarly, when it sees a visual of a person with a raised hand holding a weapon and moving nearer, it may sense an "attack", again based on its earlier machine learning.
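Mechanically, such recognition is a matching step against stored references. The sketch below is a toy illustration: the three-number "voiceprints" and the names are invented, and real systems derive high-dimensional embeddings from a speech model, but the matching logic is essentially this nearest-match comparison.

```python
import numpy as np

# Invented reference "voiceprints" learnt earlier; real ones come from a model.
ENROLLED = {"naavi": np.array([0.9, 0.1, 0.3]),
            "visitor": np.array([0.2, 0.8, 0.5])}

def identify_speaker(sample: np.ndarray, threshold: float = 0.95):
    """Return the enrolled identity whose stored voiceprint best matches the sample."""
    best_name, best_score = None, threshold
    for name, ref in ENROLLED.items():
        # Cosine similarity between the incoming sound and each remembered voice.
        score = float(np.dot(sample, ref) /
                      (np.linalg.norm(sample) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None means no confident match, hence no response

print(identify_speaker(np.array([0.88, 0.12, 0.31])))  # -> 'naavi'
```

The analogy in the next paragraph follows directly: the response is only a replay of what was earlier fed in and matched.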

At the end of the day, even these responses are a replay of an earlier input, and hence the hand of the developer can be identified in the response. In real life, an action of a minor is ascribed to the parent as long as the person is a minor; after attaining majority, the responsibility shifts to the erstwhile minor.

Similarly, an AI has to be recognized with reference to its "maturity" and identified as either an "Emancipated AI" or a "Dependent AI".

This difference is not captured by the EU AI Act.

The EU Act only identifies the type of decisions that an AI generates, tries to identify the "risks" and incorporates them into a classification tag. This is like identifying that a knife in the hands of a child is a risk but a knife in the hands of an adult is not a risk: the maturity of the algorithm is not the consideration; the identified risk is. Whether this is fine at the current stage or could have been improved is a matter of debate.

The five suggested classifications (encoded in a short sketch after this list) are

  1. Unacceptable Risk
  2. High Risk
  3. Low Risk
  4. Generative AI
  5. No Risk
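For compliance teams that need to track which obligations attach to which system, the tiering can be encoded directly. The sketch below is illustrative only: the one-line obligation summaries and the example use-case tags are shorthand drawn from the discussion in this post, not the Act's text, and a real classification would follow the Act's annexes.

```python
from enum import Enum

class EUAIRiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted subject to strict obligations"
    LOW = "light transparency duties"
    GENERATIVE = "disclosure and documentation duties"
    NO_RISK = "no specific obligations"

# Example tags taken from the use cases discussed in this post.
USE_CASE_TIER = {
    "predictive_policing": EUAIRiskTier.UNACCEPTABLE,
    "medical_device_ai": EUAIRiskTier.HIGH,
    "emotion_recognition_at_work": EUAIRiskTier.UNACCEPTABLE,  # no medical/safety purpose
    "deepfake_generation": EUAIRiskTier.NO_RISK,  # the Act's surprising placement, noted below
}

def obligations_for(use_case: str) -> str:
    tier = USE_CASE_TIER.get(use_case)
    if tier is None:
        return f"{use_case}: unclassified, assess before deployment"
    return f"{use_case}: {tier.name} ({tier.value})"

print(obligations_for("predictive_policing"))
# predictive_policing: UNACCEPTABLE (banned outright)
```

The very difficulty of filling in such a table, with emotion recognition appearing under two tiers depending on context, is what the overlap discussion later in this post points to.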

The unacceptable risk AIs are banned; they include:

  • Behavioral manipulation or deceptive techniques to get people to do things they would otherwise not
  • Targeting people due to things like age or disability to change their behavior and/or exploit them
  • Biometric categorization systems, to try to classify people according to highly sensitive traits
  • Personality characteristic assessments leading to social scoring or differential treatment
  • “Real-time” biometric identification for law enforcement outside of a select set of use cases (targeted search for missing or abducted persons, imminent threat to life or safety/terrorism, or prosecution of a specific crime)
  • Predictive policing (predicting that people are going to commit crime in the future)
  • Broad facial recognition/biometric scanning or data scraping
  • Emotion inferring systems in education or work without a medical or safety purpose

This categorization seriously affects the use of AI in policing. It is like banning the knife altogether, whether it is used by a child or an adult.

On the other hand, a "purpose-based" classification, under which something like predictive policing is permitted under certain controlled conditions but not otherwise, could have been an approach worth considering. We know that the EU does not trust governments, and hence it was natural for it to take this stand. India cannot take such a stand.

This type of approach effectively says "Privacy is the birthright of criminals" and "Security is not the right of honest citizens". It is my view that such an approach should be unacceptable in India.

However, knowing the behaviour of our Courts, we can predict that if a law upholding the use of AI for security is introduced in India, it will be challenged in Court.

The EU Act concedes that the use of real-time biometric identification for law enforcement may be permitted in certain instances, such as the targeted search for missing or abducted persons or cases of crime and terrorism. Fortunately, the current DPDPA 2023 does recognize "instrumentalities of the State" that may be exempted from data fiduciary responsibilities in certain circumstances.

Behavioural manipulation and the profiling of people on the basis of biometric categorization are banned under the EU Act.

The second category, namely High Risk, includes AI in medical devices, vehicles, policing and emotion recognition systems.

It is noted that emotion inferring in education or work is "banned" under the Act, while emotion recognition systems in general are classified as high risk rather than unacceptable risk. This casts doubt on whether humanoid robots under development, which include the capture of and response to emotional expression, would fall among the non-permitted uses. Similarly, AI in policing is in the high risk category, but "broad facial recognition" and "predictive policing involving profiling of people as to whether they are likely to commit crimes in future" are on the banned list.

This overlapping of "unacceptable" and "high" risks could lead to confusion as we go on. It suggests that the classification should rest more on the purpose of use than on the type of AI. More debate is required to understand the compliance obligations arising out of the classification.

The use of AI in deepfake situations is considered “No Risk” and is another area on which India needs to take a different stand.

To summarize our observations:

1. The "banning" of certain AI systems may disrupt innovation.

2. The risk classification is unclear and overlapping.

3. The maturity of the machine learning process is not considered in the classification.

4. The classification mixes up the purpose of use and the nature of the algorithm, which needs clarity.

There is no doubt that legislation of this type is complex and credit is due for attempting it. India should consider improving upon it.

Reference Articles:

Clear View

Compliance Checker tool
