Tax Professionals in the Age of Personal Data Protection: Operational and Legal Impacts of DPDP Act

Article Contributed by: Mr. M. G. Kodandaram

Introduction

The Digital Personal Data Protection Act, 2023[i] (DPDP Act), read with the Digital Personal Data Protection Rules, 2025[ii] (DPDP Rules), marks a structural shift in India’s regulatory approach to personal data in digital form. While the law applies horizontally across sectors, its impact on tax professionals, engaged in direct and indirect tax compliance, advisory, litigation, and technology-driven reporting alike, is particularly profound.

Tax practice in India is inherently data-intensive. From personal data like PAN, Aadhaar, bank details, income tax returns, salary structures, invoices, GST returns, e-way bills, shipping documents, to digital audit trails and electronic evidence, tax professionals routinely process vast volumes of sensitive personal and financial data. The DPDP framework introduces a rights-centric and accountability-driven regime that fundamentally reshapes how such data may be collected, stored, shared, retained, and erased.

The DPDP Act and the accompanying Rules do not operate merely as a privacy law; they establish a comprehensive compliance architecture that directly interfaces with Income Tax, GST, Customs, Corporate Tax advisory, audits, representations before statutory authorities, and tax technology platforms. This article examines how the DPDP regime repositions tax professionals as regulated Data Fiduciaries and Data Processors, thereby expanding their legal exposure beyond traditional tax statutes to include significant data protection liabilities, including penalties running into hundreds of crores.

Digitalisation of Tax Administration

Over the last two decades, India’s tax administration has undergone a far-reaching process of digitalisation that has fundamentally reconfigured the manner in which taxes are assessed, administered, monitored, and enforced across statutes.

Under the Income-tax Act, the shift to mandatory electronic filing of returns, statements, audit reports, and transfer pricing documentation has been accompanied by the introduction of faceless assessments, faceless appeals, and centralised processing through CPC and NFAC mechanisms. These reforms have replaced physical interfaces with algorithm-driven workflows, electronic notices, and digital evidence submissions, making the tax process heavily dependent on continuous flows of personal, financial, and transactional data. The increasing use of data analytics, risk-based scrutiny selection, and AI-assisted profiling further underscores the centrality of digital data in contemporary income-tax administration.

The Goods and Services Tax (GST) regime has accelerated this transformation by embedding technology at the core of compliance and enforcement. The GST Network (GSTN) operates as a unified digital backbone for registration, return filing, tax payments, refunds, audits, and adjudication. The introduction of e-invoicing, e-way bills, automated matching of input tax credit, and system-generated notices has created a real-time compliance environment where vast volumes of supplier-customer data, logistics information, banking details, and identity records are continuously processed. GST compliance is no longer episodic but perpetual, driven by integrated digital systems that require tax professionals to manage, reconcile, and interpret data streams across multiple platforms.

A similar course is evident in customs and indirect tax administration, where automation through ICEGATE, faceless assessment groups, electronic bills of entry, shipping bills, risk management systems, and integration with port, shipping, advance electronic filing of arrival and departure manifests, and logistics platforms has transformed trade facilitation and enforcement. Importers and exporters are now subject to digital documentation requirements involving invoices, packing lists, valuation data, origin certificates, and logistics information, all of which are processed, shared, and retained electronically across interconnected government systems. Customs compliance has thus become deeply intertwined with data governance, cybersecurity, and information management.

Collectively, these developments have blurred the traditional doctrinal boundaries between tax law and information law. Tax professionals today do not merely advise on statutory interpretation or compliance strategy; they operate as custodians, processors, and managers of extensive digital repositories containing sensitive personal, financial, and commercial data. In this transformed regulatory environment, the DPDP Act, 2023 assumes critical importance, not as a peripheral privacy statute, but as a central governing framework that directly regulates the everyday conduct of tax practice. The DPDP regime overlays the digital tax ecosystem with enforceable data protection obligations, redefining professional responsibility in an era where tax administration is inseparable from technology-driven data governance.

Tax Professionals as Data Fiduciaries and Data Processors

The DPDP Act introduces a fundamental reclassification of roles that has direct and far-reaching implications for tax professionals. Section 2(i) and (k) of the Act draws a clear distinction between a Data Fiduciary, defined as any person who determines the purpose and means of processing personal data, and a Data Processor, who processes such data on behalf of a Data Fiduciary. When applied to tax practice, this distinction assumes practical and legal significance.

A chartered accountant firm, tax consultancy, GST practitioner, or law firm that decides what client data is collected, the manner in which it is stored, the purposes for which it is analysed, and the circumstances in which it is shared with tax authorities or other stakeholders clearly exercises decisive control over both the purpose and the means of processing. Such professionals therefore function squarely as Data Fiduciaries under the DPDP framework. Conversely, cloud-based accounting software providers, GST return-filing utilities, GST Suvidha Providers (GSPs), payroll processors, document management systems, and outsourced IT service providers typically operate as Data Processors, handling personal data strictly on behalf of, and under the instructions of, the tax professional or firm.

The DPDP Act places the ultimate responsibility for compliance firmly on the Data Fiduciary, irrespective of whether the actual processing activity is outsourced to third-party technology vendors. For tax professionals, this represents a decisive shift from the long-held assumption that risks associated with client data are borne primarily by software providers. Under the DPDP regime, accountability cannot be delegated; tax professionals remain legally answerable for the security, lawful processing, and protection of client data throughout the entire data lifecycle.

Consent and Notice: Rethinking Client Engagement

One of the most transformative features of the DPDP regime is its fundamental re-engineering of the concept of consent, which directly impacts the manner in which tax professionals engage with their clients. Traditionally, personal data in tax practice has been collected through engagement letters, email correspondence, online portals, authorisation forms, and, at times, informal exchanges, with limited emphasis on granular disclosure of data usage. The DPDP Act alters this position by elevating informed consent to a core legal foundation for lawful data processing.

Consent is no longer a broad or implied understanding embedded within professional engagement; it must be free, specific, informed, unconditional, and unambiguous, and must be evidenced through a clear affirmative action by the data principal. Crucially, such consent must be preceded by a clear and standalone notice that transparently explains the nature of the personal data being collected, the precise purposes for which it will be processed, the manner in which it may be shared, and the rights available to the individual, including the right to withdraw consent. Equally significant is the requirement that withdrawal of consent must be as easy and accessible as the process by which consent was originally given, thereby preventing procedural barriers or indirect compulsion.

These requirements have direct and immediate implications for client onboarding forms, engagement letters, GST return authorisations, powers of attorney, and vakalatnama-based representations before tax authorities. Tax professionals are now required to explicitly articulate and disclose the exact purposes of data processing, whether it is for statutory compliance, advisory services, litigation support, representation before authorities, statutory record retention, or regulatory disclosures, thereby transforming client engagement documents into instruments of transparency and legal accountability under the DPDP framework.

Legitimate Uses Without Consent: Relief for Statutory Compliance

The DPDP Act adopts a pragmatic approach by recognising that, in regulated professions such as taxation, certain forms of data processing are not merely incidental but legally unavoidable. Section 7 of the Act therefore carves out a carefully structured exemption permitting the processing of personal data without obtaining express consent where such processing is undertaken for “legitimate uses.” These include compliance with any law in force in India, adherence to judicial or regulatory orders, responding to legal claims, and the discharge of State functions, including taxation. This statutory recognition is of critical importance to tax professionals, whose day-to-day activities are intrinsically linked to mandatory disclosures and statutory compliances.

In the context of GST and other tax laws, activities such as filing returns, furnishing statements, responding to notices and summons, producing books of account and electronic records, participating in audits, investigations, and adjudication proceedings, and representing clients before tax authorities are not discretionary acts but legal obligations imposed by statute. The DPDP framework acknowledges this reality and ensures that such essential functions are not rendered unworkable by rigid consent requirements. Consequently, tax professionals are not required to obtain fresh or repeated consent for each instance of data processing that is directly necessitated by statutory compliance or regulatory compulsion.

At the same time, the exemption under Section 7 is not open-ended. It is expressly purpose-limited and must be construed narrowly to align with the specific legal obligation that justifies the processing. While personal data may be processed to the extent necessary for compliance with tax laws or judicial directives, the same data cannot be retained indefinitely, repurposed for unrelated commercial analysis, internal profiling, or shared with third parties beyond what the law mandates. Any secondary use of data must independently satisfy the requirements of consent or another recognised lawful basis under the Act. For tax professionals, this imposes a nuanced obligation to distinguish between statutorily compelled processing and discretionary data use, ensuring that the former remains confined to its legal purpose while the latter is subjected to the full rigour of the DPDP consent and accountability framework.

Data Retention vs Tax Record Retention

A significant area of friction under the DPDP framework arises from the intersection of data minimisation and erasure principles with the entrenched record-retention obligations embedded in tax statutes. Indian tax laws operate on the premise of extended and, in some cases, open-ended retention of records to enable assessment, reassessment, audit, investigation, and appellate scrutiny. Income-tax legislation routinely requires preservation of books of account, supporting documents, and electronic records for periods ranging from six to ten years, with longer retention triggered in cases involving reassessment or search proceedings. Similarly, the GST regime mandates maintenance of records for a minimum of six years from the due date of the annual return, while transfer pricing documentation may be required to be preserved for up to ten years given the extended limitation periods and international information-exchange obligations. Where litigation is pending, records are often retained indefinitely as a matter of necessity, professional prudence, and judicial expectation.

Under the DPDP regime, personal data is to be retained only for as long as it is necessary to fulfil the specific purpose for which it was collected, subject to a universal minimum retention period of one year, and based on the principles of purpose limitation and data minimisation. Beyond this baseline, continued retention is permitted only where it is required by law. This creates an inherent legal anxiety for tax professionals, who must reconcile long statutory retention mandates with the DPDP’s expectation of timely erasure once the purpose of processing is exhausted. The Act does not override tax laws, but it does require that the justification for prolonged retention be legally defensible and demonstrably linked to a statutory obligation.

In practical terms, this places a new compliance burden on tax professionals and firms to actively map and document the legal basis for retaining client data beyond the DPDP’s general erasure expectations. Each category of record, like returns, working papers, correspondence, litigation files, and electronic data etc., must be aligned with the specific tax provision, limitation period, or judicial proceeding that necessitates its retention. Absent such documentation, continued storage may be vulnerable to challenge as unlawful or excessive under the DPDP framework. The failure to undertake this mapping exercise exposes firms not only to regulatory scrutiny but also to allegations of unlawful data hoarding, thereby transforming record retention from a passive archival function into an active compliance obligation at the intersection of data protection and tax law.
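The mapping exercise described above can be sketched as a minimal retention register. The following is a rough illustration in Python; the record categories, legal-basis descriptions, periods, and dates are hypothetical placeholders drawn loosely from the discussion above, not authoritative statutory citations:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RetentionEntry:
    record_category: str
    legal_basis: str       # statutory provision relied on for continued retention
    retention_years: int   # minimum retention period under that provision
    trigger_date: date     # date from which the period runs (e.g. annual return due date)

    def earliest_erasure_date(self) -> date:
        # Retention runs for the stated number of years from the trigger date
        return self.trigger_date.replace(year=self.trigger_date.year + self.retention_years)

# Illustrative register entries; the periods mirror those discussed in the text above.
register = [
    RetentionEntry("GST returns and records",
                   "GST law (six years from annual return due date)", 6, date(2024, 12, 31)),
    RetentionEntry("Transfer pricing documentation",
                   "Income-tax law (extended limitation)", 10, date(2024, 3, 31)),
]

for entry in register:
    print(entry.record_category, "retain until", entry.earliest_erasure_date().isoformat())
```

A register of this kind makes the legal basis for each category of stored data explicit and auditable, which is precisely what the DPDP framework expects a firm to be able to demonstrate.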

Data Breach Obligations

The DPDP regime introduces a fundamentally new layer of compliance risk for tax professionals by transforming data security incidents from internal risk-management concerns into statutorily regulated events. Tax firms routinely handle some of the most sensitive categories of personal and financial data in the regulatory ecosystem, and the concentration of such high-value data makes tax professionals particularly vulnerable to cyber incidents, phishing attacks, ransomware, insider leaks, and inadvertent disclosures arising from cloud-based collaboration and outsourced IT systems.

Under the Rules, any personal data breach triggers immediate and non-discretionary obligations. The affected data principals must be informed without delay, ensuring transparency and enabling them to take protective measures. Simultaneously, the breach must be intimated to the Data Protection Board of India, followed by the submission of a detailed report within seventy-two hours setting out the nature of the breach, the categories of data compromised, the likely consequences, and the remedial steps undertaken. These obligations apply irrespective of whether the breach occurred within the firm’s own systems or at the level of a third-party data processor, reinforcing the principle that ultimate accountability rests with the Data Fiduciary.

This represents a decisive departure from traditional professional practice. Historically, data breaches in tax firms were treated primarily as reputational crises, managed through client communication, internal remediation, and, at most, contractual liability. The DPDP framework reclassifies such incidents as potential statutory violations, attracting severe financial consequences. With penalties that may extend up to ₹250 crore, data breach compliance now assumes the same seriousness as substantive tax defaults or professional misconduct. For tax firms, this necessitates the institutionalisation of incident response protocols, cybersecurity audits, breach notification workflows, and contractual risk allocation with IT vendors. Data protection compliance is no longer ancillary to tax practice; it has become an integral component of professional risk management in the digital tax ecosystem.
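The notification timeline described above can be reduced to a small sketch. This illustrates only the seventy-two-hour reporting window; the function name and dictionary structure are our own illustration, not anything prescribed by the Rules:

```python
from datetime import datetime, timedelta

REPORT_WINDOW = timedelta(hours=72)  # detailed report to the Board, as described above

def breach_deadlines(detected_at: datetime) -> dict:
    """Return the notification milestones for a personal data breach."""
    return {
        # Affected data principals and the Board are to be informed "without delay"
        "notify_data_principals": detected_at,
        "intimate_board": detected_at,
        # The detailed report follows within seventy-two hours
        "detailed_report_due": detected_at + REPORT_WINDOW,
    }

deadlines = breach_deadlines(datetime(2026, 1, 5, 9, 0))
print(deadlines["detailed_report_due"])  # 2026-01-08 09:00:00
```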

GST, Income Tax and the Compliance Technology Stack

Contemporary GST and income-tax practice operate on a dense and interlinked compliance technology stack, without which statutory obligations are practically impossible to discharge. Cloud-based accounting systems, GST return-filing utilities, e-invoice generation and reconciliation tools, customs and ICEGATE interfaces, and increasingly AI-driven analytics for risk assessment and advisory have become integral to day-to-day professional functioning. These platforms process vast volumes of personal and financial data on a continuous basis, often across multiple jurisdictions and servers, fundamentally reshaping how tax compliance is delivered.

The DPDP framework directly intervenes in this technology-driven ecosystem by re-characterising software vendors and platform providers as Data Processors and imposing enhanced accountability on tax professionals as Data Fiduciaries. Contracts with accounting software providers, GST utilities, payroll platforms, and analytics vendors can no longer remain purely commercial or functional. They must now incorporate robust data protection clauses covering purpose limitation, confidentiality, security safeguards, breach reporting timelines, sub-processing restrictions, and audit rights. Informal or standard “click-wrap” arrangements that lack DPDP-compliant safeguards expose tax firms to significant regulatory risk.

Operationally, the DPDP Rules mandate the maintenance of access logs and audit trails for personal data processing, with a minimum retention period of one year. This requirement assumes particular significance in tax practice, where multiple staff members, partners, and external consultants may access the same client datasets through shared platforms. Firms must therefore implement granular access controls, role-based permissions, and logging mechanisms capable of demonstrating who accessed what data, when, and for what purpose. These measures are not merely best practices but evidentiary safeguards in the event of regulatory scrutiny or breach investigations.
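A minimal sketch of role-based access control with an append-only audit trail might look as follows. The roles, permissions, and log fields are hypothetical examples chosen for illustration, not requirements drawn from the Rules:

```python
import csv
import io
from datetime import datetime, timezone

# Hypothetical role table; the DPDP Rules do not themselves prescribe roles.
PERMISSIONS = {
    "partner": {"read", "write", "share"},
    "staff": {"read", "write"},
    "external_consultant": {"read"},
}

# Append-only audit trail: who accessed what, when, and whether it was permitted.
log_buffer = io.StringIO()
log = csv.writer(log_buffer)
log.writerow(["timestamp_utc", "user", "role", "action", "record", "allowed"])

def access(user: str, role: str, action: str, record: str) -> bool:
    allowed = action in PERMISSIONS.get(role, set())
    log.writerow([datetime.now(timezone.utc).isoformat(), user, role, action, record, allowed])
    return allowed

assert access("a.sharma", "staff", "read", "client-042/returns")
assert not access("vendor1", "external_consultant", "share", "client-042/returns")
```

In practice the log would be written to tamper-evident storage and retained for at least the one-year period noted above, so that it can serve as evidence in a breach investigation.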

Children’s Data, Employees, and Payroll Processing

The DPDP Act adopts a heightened standard of protection in relation to children’s personal data, a feature that has direct implications for tax professionals engaged in payroll processing, employee taxation, and TDS compliance. Under Section 2(f) of the Act, a “child” is defined as an individual who has not completed eighteen years of age, and “children’s data” refers to personal data relating to such individuals.

In cases where personal data relating to minors is processed, the DPDP regime mandates verifiable consent of the parent or lawful guardian, subject to limited statutory exemptions. This requirement assumes relevance where tax professionals process information about employees’ minor children for computing exemptions, deductions, or benefits that are linked to payroll structuring or statutory disclosures. While the Act recognises certain exemptions for employment-related data processing, these exemptions are not absolute and must be interpreted in light of the purpose and necessity of the processing activity. Data collected strictly for compliance with tax laws or labour regulations may fall within the legitimate use framework, but any processing beyond this narrow purpose attracts enhanced scrutiny.

Consent Managers and the Future of Tax Compliance Platforms

The DPDP Act introduces the concept of Consent Managers as a distinct institutional mechanism designed to operationalise consent in a structured, transparent, and technology-driven manner. Under Section 2 of the Act, a Consent Manager is defined as a person registered with the Data Protection Board of India (DPBI) who acts as a single point of contact to enable a Data Principal to give, manage, review, and withdraw consent through an accessible, interoperable, and secure platform. This definition marks a significant shift from informal, document-based consent practices to an auditable and standardised consent architecture governed by regulatory oversight.

In the context of taxation, the emergence of Consent Managers has far-reaching implications for the future of tax administration and compliance technology. Indian tax authorities are increasingly dependent on consent-based data flows across multiple regulatory silos, including banking systems, GSTN, income-tax databases, customs platforms, and corporate filings under the MCA framework. As data-driven governance deepens, Consent Managers may evolve into regulated intermediaries facilitating lawful and traceable data sharing between taxpayers, professionals, financial institutions, and government platforms. This could fundamentally alter how authorisations, mandates, and data access permissions are granted and monitored in tax compliance ecosystems.

For tax professionals, particularly those advising fintech, regtech, and tax technology platforms, this development opens up an entirely new compliance and advisory frontier. Consent Managers could become embedded within return-filing portals, GST reconciliation tools, payroll platforms, and AI-driven compliance dashboards, enabling real-time, revocable, and purpose-specific access to taxpayer data. Such an architecture would not only enhance data protection compliance but also reduce disputes over unauthorised access, over-collection, and prolonged data retention. Understanding the legal contours, operational standards, and liability framework applicable to Consent Managers will therefore become essential for professionals engaged in technology-enabled tax practice.
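The purpose-specific, revocable consent described above can be sketched as a simple record structure. This is an illustrative data model only, not an interface defined by the Act or the Rules:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    principal: str
    purpose: str                          # consent is purpose-specific under the Act
    given_at: datetime
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawal must be as easy as giving consent, hence a single step here
        self.withdrawn_at = datetime.now(timezone.utc)

consent = ConsentRecord("client-017", "GST return filing", datetime.now(timezone.utc))
assert consent.active
consent.withdraw()
assert not consent.active
```

A real Consent Manager platform would layer verification, interoperability, and audit trails on top of records of this kind; the point of the sketch is that each consent carries its own purpose, timestamp, and revocation state.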

Beyond compliance, the Consent Manager framework also presents a potential new business opportunity. Tax technology providers, professional firms, and regulated intermediaries may explore the development of DPDP-compliant consent management solutions tailored to tax workflows, including GST filings, income-tax representations, audit authorisations, and litigation support. By positioning themselves at the intersection of data protection law and tax administration, such platforms could offer value-added services that combine regulatory assurance with operational efficiency.

Significant Data Fiduciaries: Large Tax Platforms at Risk

The DPDP framework introduces an enhanced compliance regime for entities classified as Significant Data Fiduciaries (SDFs), a designation that is likely to encompass large tax filing platforms, payroll aggregators, and GST compliance service providers operating at scale. Owing to the volume and sensitivity of personal and financial data processed, the Central Government may notify such tax technology entities as SDFs, thereby subjecting them to heightened statutory obligations. Once designated, these platforms are required to appoint a Data Protection Officer (DPO) based in India, undertake periodic and annual data protection audits, conduct Data Protection Impact Assessments (DPIAs) for high-risk processing activities, and ensure transparency and accountability in algorithmic decision-making systems that influence compliance outcomes or risk profiling.

For tax technology companies, this marks a decisive shift in regulatory expectations. DPDP compliance is no longer a peripheral governance concern but becomes structurally intertwined with their core business model, much like adherence to GST, income-tax, and regulatory reporting obligations. Failures in data governance, whether in audit readiness, algorithmic opacity, or risk assessment processes, can now attract penalties comparable to substantive tax defaults.

Penalties: A New Dimension of Professional Risk

Unlike disciplinary proceedings before professional bodies or penalties under tax statutes, DPDP violations attract civil penalties of an unprecedented scale. Failure to implement reasonable security safeguards for personal data may invite penalties extending up to ₹250 crore, while failure to notify data breaches to affected individuals and the Data Protection Board can attract penalties of up to ₹200 crore. Even general non-compliance with the Act and Rules carries exposure of up to ₹50 crore, underscoring the seriousness with which data governance failures are viewed under the new regime.

For tax professionals and firms, the DPDP framework compels a fundamental reassessment of professional risk management. Traditional professional indemnity insurance, internal controls, and governance structures designed around tax advisory risks may no longer be adequate. Firms must re-evaluate coverage limits, redesign internal compliance frameworks, and embed data protection governance at the same level of seriousness as statutory tax compliance, recognising that DPDP exposure has become an inseparable component of modern tax practice.

Professional Ethics, Confidentiality and DPDP

The DPDP Act does not displace the long-standing ethical and confidentiality obligations that govern tax professionals under their respective professional statutes; rather, it reinforces and formalises them within a statutory data protection framework. Duties of confidentiality imposed under the Chartered Accountants Act, the Advocates Act, and the Company Secretaries Act have historically functioned as core ethical norms, enforced primarily through self-regulatory disciplinary mechanisms. These obligations are deeply embedded in professional culture, premised on trust, fiduciary responsibility, and the inviolability of client information. The DPDP regime builds upon this foundation, recognising the sanctity of confidential information while extending its protection into the digital and technology-mediated domain of modern professional practice.

However, the ethical duties that were once largely internal to the profession are now subject to oversight by an external statutory authority, namely, the Data Protection Board of India. Confidentiality is no longer merely a question of professional propriety or disciplinary compliance; it is recast as a legally enforceable obligation, breach of which attracts significant civil penalties.

For tax professionals, this convergence of professional ethics and data protection law has profound implications. A lapse that might earlier have resulted in censure, suspension, or reputational harm can now trigger parallel consequences under the DPDP framework, including regulatory proceedings and substantial financial exposure. The protection of client data thus ceases to be an internal matter of professional honour alone and becomes a matter of public law compliance. In this sense, the DPDP Act elevates confidentiality from an ethical ideal to a legally policed standard, demanding that tax professionals institutionalise data protection as an integral element of professional conduct, governance, and accountability in an increasingly digitised tax ecosystem.

From Tax Advisors to Data Trustees

As the DPDP Act and Rules become fully enforceable on 14 May 2027, tax professionals are confronted with a transformative mandate to evolve from traditional fiscal advisors into conscientious trustees of digital personal data. This evolution requires a structured, phased approach to compliance, encompassing comprehensive data mapping and classification, the redesign of engagement documentation to ensure privacy-compliant consent, rigorous review of contracts with data processors, robust incident response planning, and systematic staff training. Equally critical is the establishment of transparent client communication frameworks, ensuring that Data Principals are informed of their rights and the mechanisms through which those rights are exercised.

Adopting this proactive approach is no longer optional. Early movers in this space will not merely mitigate the risk of substantial DPDP penalties; they will accrue a competitive advantage in a tax ecosystem increasingly defined by trust, transparency, and regulatory accountability.

In an era marked by faceless assessments, AI-driven risk profiling, and data-intensive enforcement, the principles of data protection are inseparable from professional competence. The road to 2027 challenges professionals to rethink their role fundamentally: those who embrace the responsibilities of data trusteeship will define the future of tax practice, setting the standard for ethical, secure, and digitally resilient advisory services in India’s increasingly automated fiscal landscape.

Mr. M. G. Kodandaram, IRS.
Advocate and Consultant

[i] https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
[ii] https://www.meity.gov.in/static/uploads/2025/11/53450e6e5dc0bfa85ebd78686cadad39.pdf

 


Fundamental Rights Impact Assessment (FRIA)

We refer to the DGPSI-AI framework, which is an extension of the DGPSI framework for DPDPA compliance. DGPSI-AI covers the requirements of an AI deployer who is a data fiduciary and has obligations under the DPDPA.

This framework is a pioneering effort in India showing how voluntary AI regulation can be brought in through a framework even though we do not yet have a full-fledged law on AI in place. In the EU, the EU AI Act is a comprehensive legislation to regulate AI usage; naturally, it also impacted the data protection scenario.

One of the requirements of the EU AI Act (Article 27) is that, before deploying high-risk AI systems, deployers shall conduct an “Impact Assessment”. This “Impact Assessment” is directed towards the fundamental rights that the use of such systems may impact. It is called the FRIA, or Fundamental Rights Impact Assessment, and is to be conducted on the first use of the system. This framework may also impact the implementation of DORA (the EU Digital Operational Resilience Act).

Now the European Center for Not-for-Profit Law (ECNL) and the Danish Institute for Human Rights (DIHR) have released a template for the FRIA, which is an interesting document to study. It may be compared with DGPSI-AI, which has six principles and nine implementation specifications for AI deployers.

A copy of the report is available here.

In DGPSI-AI, one of the principles is that:

“The responsibility of the AI deployer as a “Fiduciary” shall ensure all measures to safeguard the society from any adverse effect arising out of the use of the AI.”

Further, the implementation specifications include

The deployer of an AI software, in the capacity of a Data Fiduciary, shall document a Risk Assessment of the software, obtaining a confirmation from the vendor that the software can be classified as ‘AI’ based on whether the software leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as the DPIA for the AI process.

The DPIA shall be augmented with periodical external Data Auditor’s evaluation at least once a year.

The FRIA suggested by the ECNL is considered similar to a DPIA and highlights the requirement to document the impact on fundamental rights. It could serve as a guideline for conducting DPIAs under the GDPR.

DGPSI-AI focuses more on compliance with the DPDPA 2023, while the FRIA tries to focus directly on the “Fundamental Rights”.

The template released in this context lists 51 fundamental rights over which the assessment is to be conducted.

The following questions are suggested to be explored across this canvas:

1. How can the deployment of the AI system negatively affect people?
2. What drives these negative outcomes?
3. Which people/groups are negatively affected in this scenario?
4. What is the fundamental right negatively impacted?
5. What is the extent of the interference with the respective fundamental right?
6. How would you assess the likelihood of this interference? (Explain your answer.)
7. What is the geographic scope of the negative impact?
8. Within the group identified, how many people will be negatively impacted?
9. Amongst this group, are there people in situations of vulnerability, such as children, the elderly, people with disabilities, low-income households, or racial/ethnic minorities? (Explain your answer.)
10. What is the gravity of the harm that affected people might experience?
11. What is the irreversibility of the harm?
12. What is the level of prioritisation for this impact?
13. List the actions already in place that can prevent and mitigate the negative impact.
14. List all the additional actions that should be taken to prevent and mitigate the negative impacts.

The template suggests allocation of scores for each fundamental right and their aggregation.
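As a minimal sketch of such a scoring-and-aggregation step (the field names, the 1-5 scales and the multiplicative formula below are our own illustrative assumptions, not taken from the ECNL template):

```python
from dataclasses import dataclass

# Hypothetical scoring model for a FRIA-style assessment.
# The 1-5 scales and the gravity x likelihood x irreversibility
# formula are illustrative assumptions only.

@dataclass
class RightImpact:
    right: str            # e.g. "Right to non-discrimination"
    gravity: int          # 1 (minor) .. 5 (severe)
    likelihood: int       # 1 (remote) .. 5 (almost certain)
    irreversibility: int  # 1 (fully reversible) .. 5 (irreversible)

    def score(self) -> int:
        # Simple multiplicative risk score; a real template may
        # weight or cap these factors differently.
        return self.gravity * self.likelihood * self.irreversibility

def aggregate(impacts):
    """Per-right scores and their total across all assessed rights."""
    per_right = {i.right: i.score() for i in impacts}
    return per_right, sum(per_right.values())

impacts = [
    RightImpact("Right to privacy", gravity=4, likelihood=3, irreversibility=2),
    RightImpact("Right to non-discrimination", gravity=5, likelihood=2, irreversibility=4),
]
per_right, total = aggregate(impacts)
print(per_right)  # {'Right to privacy': 24, 'Right to non-discrimination': 40}
print(total)      # 64
```

The per-right scores can then feed the prioritisation question (question 12 above), with the highest-scoring rights attended to first.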

While we appreciate the comprehensive nature of the guideline, there is one fundamental principle that regulators and advisors should follow: any compliance has to be designed to be as simple as possible. Making it complicated will only add disproportionate cost or invite rejection.

DGPSI-AI has tried to avoid this trap and leaves it to the auditor to decide which risks are to be factored in. Again, by aligning the DGPSI framework to "Compliance with DPDPA", it has been ensured that business entities need not be experts in the constitutional interpretation of fundamental rights but can focus on the law passed by Parliament.

Making an assessment of the impact of an AI system on a canvas of 51 fundamental rights is a humongous task and perhaps not practical. India has only six fundamental rights, of which the "Right to Privacy", carved out of the "Right to Life and Liberty", is relevant for DPDPA compliance and is included in DGPSI.

In the DGPSI-GDPR framework, there are the following four implementation specifications to address the issue of AI deployment.

  1. The organization shall establish an appropriate policy to identify the AI risks in its own processing and in processing with data processors, and obtain appropriate assurances.
  2. The organization shall establish an appropriate policy to ensure that there is an accountable human handler in the organization, in the AI vendor organization, and in the AI-deploying data processor organization.
  3. The organization shall adopt AI deployment based on a specific documented requirement.
  4. All AI usage shall be supported by appropriate guardrails to mitigate the risks.

While we accept that the four specifications of DGPSI-GDPR addressing AI requirements may not be as comprehensive as the ECNL framework, we hope they are more practical.

I would be happy to receive the views and comments of the experts.

Naavi

 

Posted in Privacy

Business Model Maturity Index Method of Data Valuation

In continuing our discussion on Data Valuation methods, we have come across an interesting approach called the "Business Model Maturity Index method".

This method tries to assess an organization's capability, focusing on data and analytics, as it progresses through defined levels such as "Chaotic", "Awareness", "Defined", "Managed", "Optimized", etc.

We have discussed a version of this thought in expanding the "Theory of Data", more particularly when discussing the life cycle of data (also refer here).

Mr Bill Schmarzo, an author and educator in the domain of Big Data, developed the concept of the Big Data Business Model Maturity Index to help organizations measure how effective they are at leveraging data and analytics to power their business models.

This approach suggests growth from the Business Monitoring phase to gaining Business Insights, then to Business Optimization and Data Monetization, leading to Business Metamorphosis into a data-driven organization.

These academic thoughts are evolving the world over, and India needs to catch up with the developments. Perhaps FDPPI will lead this academic exercise. If any academic institution joins hands, perhaps we can make further progress.

Naavi

Also Refer here

 

 

Posted in Privacy

Data Valuation Methods-3

Continued

We have discussed the different data valuation methods which were discussed by the Australian economists Moody and Walsh in our previous post.

The first step in data valuation in an organization starts with creation of the Data Inventory. Then they need to be classified. DGPSI discusses extensively how a centralized personal data inventory can be created and the classifications that are required for DPDPA Compliance.

For the purpose of Data Valuation, we still need the inventory of data but the classification system has to follow a different  pattern.

Firstly, Data Valuation for an organization needs to cover both Non-Personal Data and Personal Data. Currently, Non-Personal Data is classified on the basis of information security requirements and Personal Data on the basis of the data protection laws.

The “Value perspective” for both Non personal data and personal data is different.

As regards personal data, DVSI outlines the method of adding depth, age and sensitivity weightages to the intrinsic value of the personal data set.
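As a minimal sketch of this weightage idea (the multiplier values below are our own illustrative assumptions; DVSI does not prescribe these exact numbers):

```python
# Illustrative sketch of adjusting an intrinsic (cost-based) value
# of a personal data set with depth, age and sensitivity weightages.
# All factor values here are assumptions for illustration only.

def weighted_value(intrinsic_value: float,
                   depth_factor: float,
                   age_factor: float,
                   sensitivity_factor: float) -> float:
    """Apply multiplicative weightages to the intrinsic value."""
    return intrinsic_value * depth_factor * age_factor * sensitivity_factor

# A data set acquired at a cost of 100 units: rich profile depth (1.5x),
# two-year-old data discounted (0.8x), sensitive-data premium (1.2x).
value = weighted_value(100.0, depth_factor=1.5, age_factor=0.8,
                       sensitivity_factor=1.2)
print(round(value, 2))  # 144.0
```

The multiplicative form is one design choice; an additive or capped scheme could equally be adopted once the framework matures.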

In the case of Non personal data there are data related to finance, marketing, administration, production and Governance. Some marketing data such as market reports and surveys may be bought or subscribed. There are specific costs associated with  them which can be traced.

The weighted value additions are however a matter of expert views based on the knowledge of how data impacts the business.

In the case of public sector organizations or NGOs, not all data usage can be traced to profit making. There is a "Social Cost Benefit" associated with the activities of the organization. Capturing the creation of such value to society as the value of the output data is a challenge.

The cost-based method therefore becomes the method usable by most organizations, since it does not require a social cost-benefit computation. However, "Revaluation"-based costing can be done in some cases to correct the cost-based estimates.

Once the method for finding out the intrinsic cost is finalized with or without the refinements such as adding weights for different factors, the organization can value the data assets as on the date of the balance sheet and also monitor its movement at periodical intervals.

As a framework however, there would be a need for a Governance structure with a “Data Valuation Committee” supported by a “Data Valuation Officer” taking the responsibility for making appropriate policies and implementing them.

When DGPSI is used as the framework, the Data Valuation and Data Monetization are to be supported by  appropriate policies along with the clearance of the consent, legitimate use or exemption.

While taking a consent for Data Monetization, it is considered necessary for the organization to obtain a special consent which is properly authenticated and preferably witnessed by a third party, with a declaration such as:

“The data principal confirms that he/she understands that this consent covers monetization of the personal data and has provided a free consent for the same. The consent is witnessed by …………………, who confirms that the nature of this consent has been explained to  the data principal and he has understood it before agreeing to provide the consent.”

This is the beginning of the data valuation era in India, and the suggestions contained herein will be refined as required.

At present, Naavi’s Ujvala Consultants Pvt Ltd is open to accepting Data Valuation projects  and interested entities may contact Naavi for  details.

Naavi

Posted in Privacy

Data Valuation Methods…2

Continued

In developing a DGPSI-Data Valuation framework in support of DGPSI-Full version, let us now explore some globally prevailing thoughts on Data Valuation.

In 1999, two Australian researchers, Daniel L. Moody and Peter Walsh, published a paper, "Measuring the Value of Information: An Asset Valuation Approach". In this paper they discussed what are referred to as the "Seven Laws of Data Valuation". The concept covered the following characteristics of data that could contribute to its valuation:

  1. Data is infinitely shareable.
  2. Value increases with use.
  3. Information is perishable.
  4. Value increases with accuracy.
  5. Value increases with synergy.
  6. The value of data increases with more data up to an overload point but may decline thereafter.
  7. Information is self-generating.

Moody and Walsh also described how data can be a "raw material" which, when used with software and hardware as plant and equipment, produces information as the end product.

In this concept, software itself is an asset which, like a catalyst, works with input data to produce output data while itself remaining unchanged. The self-learning AI algorithm is a distinct category of data which cannibalizes the output data to transform itself as it evolves as a tool.

"Data" as software represents an asset like a "Fixed Asset" in the user organization, while it could be a "Finished Data Product" in the software company.

When "Data" is used in 3D printing, data as input (the 3D scan of the object) combines with data as software for 3D printing and physical raw material to result in the physical object as the end product. The software remains a re-usable element for the next production of a different product. The design data also remains as a by-product which can be re-used if a similar object has to be printed again.

There is another kind of data which is also used as a "Fixed Asset": the "Content" used to generate value by subscription or by carrying advertisements. The advertisement revenue or subscription revenue depends on how good the content is and how the content itself is promoted. An advertisement "of the content" promotes "the content" in which other advertisements are embedded. The valuation of such assets needs to take this into account. Such assets are also amenable to depreciation and sensitive to the time and accuracy of the content.

Valuing such content needs to take into account this complex web of revenue generation possibilities with time value. Such assets are better suited to the Discounted Cash Flow (DCF) or Net Present Value (NPV) method of valuation than the Cost of Acquisition method.

Hence some data assets are more amenable to the Cost of Acquisition (COA) method while others are more amenable to the DCF/NPV method. Some may require such frequent adjustments of the time value of content that a "Revaluation Method" is more acceptable.
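As a hedged illustration of the DCF/NPV approach for such a content asset (the cash flows, discount rate and cost figure below are purely assumed for the example):

```python
# Minimal NPV sketch for a content asset whose value comes from
# projected subscription/advertising cash flows rather than its
# cost of acquisition. All figures are illustrative assumptions.

def npv(discount_rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows received at the end of years 1..n."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

projected = [40.0, 40.0, 30.0, 20.0]   # declining revenue as content ages
value_dcf = npv(0.10, projected)       # 10% discount rate (assumed)
cost_of_acquisition = 90.0

print(round(value_dcf, 2))             # 105.62
print(value_dcf > cost_of_acquisition) # True: DCF value exceeds COA here
```

When the DCF value diverges materially from the COA figure, as here, that divergence is itself a signal that a revaluation of the asset may be due.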

Naavi has earlier propounded a "Theory of Data" from which many of the above seven laws can be implied. Naavi's theory is built on three hypotheses, namely "Data is in the beholder's eyes", "Data has a reversible life cycle", and "Changes in the value of data during the life cycle belong to different contributors".

In the Puttaswamy judgement, Justice Chandrachud remarked that "Data is non-rivalrous", meaning that it can be duplicated: it can change hands without depriving the earlier person of its use. However, when we look at the valuation of data we are confronted with two conflicting valuation dilemmas.

Firstly, in the case of "Confidential" information, sharing the data dilutes its value; sometimes it destroys the value of the data completely. For example, a "Password" is one item of data which, when shared, loses its value completely.

On the other hand, some data, such as "News" or "Education", increases in value when it is available for access by many. Hence a data valuer needs to classify the data properly before assigning a value under this law.

The second law “Value increases with use” is reflected in the type of content mentioned above.

For example, if nobody knows that certain data exists, it cannot have value. A classic example is the DGPSI framework, which today is known only to a few in India, and its value is limited to the recognition of this set of people. If it were known to more people, its value would correspondingly increase. This is because it is a "Data Asset" meant to be used by other data users, like a software product.

The third law, that "Information is perishable", is relevant for personal data valuation and has been used in the DVSI model, because the Data Fiduciary's permission to use the data depends on consent or legitimate use. The utility value of the data vanishes once the consent expires. In the data category of "News", the data may become stale for the news reader, while for an investigative researcher there may be a premium value in the "Forgotten Data". A classic example is some of the articles on naavi.org which may be 25+ years old, but for somebody who wants to track the legislative history of Cyber Laws in India, they are a treasure.

This principle, that the value of data may depend on the context and the audience, is part of the first hypothesis of Naavi's theory of data: "Data itself is in the beholder's eyes" and therefore the "Value of Data is also in the beholder's eyes".

This means that the valuation of data has to be tagged with a “Context” weightage.

The fourth law, that the value of information increases with accuracy, is from the context of its usage. There can be instances, such as "Anonymised Data", where accurate data is masked for a specific purpose; though the accuracy of the data is deliberately reduced, the value may be preserved or even enhanced because the data can then be used for purposes other than those for which the accurate data could have been used.

The fifth law, that the value of information increases when combined with other information, is well noted, not only because data from one division may be useful for another division in an organization but also because the entire Data Engineering and Data Analytics industry revolves around the synthesis of data to generate new insights.

However, in a personal data context, where permissions are "purpose limited", data collected for one purpose may not be automatically usable for another purpose, and this may conflict with this observation of Moody and Walsh. It holds, however, for non-personal data.

The sixth law holds that there is an "Overload Point" after which "more data is not necessarily better", since "Data Fatigue" may set in. Where laws differ for different scales of operation (e.g. a "Significant" social media intermediary or a "Significant Data Fiduciary"), beyond an overload point new obligations may come in, changing the usage pattern of the data.

The seventh law, that "Information is not depletable", is an interesting observation, since the more we use certain data, the usage pattern itself becomes additional metadata that enriches the core data. Again, this has to be seen along with the "Data usage licence", such as a digital library licence which is based on the number of uses (from the user perspective), the expiry of permission, or "limitation of use under law".

Thus the seven laws of data valuation indicated by Moody and Walsh are an interesting study which can be compared with the implications of Naavi's theory of data and the DVSI model.

We request academicians to study this relationship further.

Naavi

…continued

Posted in Privacy

Data Valuation Methods-1

P.S.: This is a continuation of a series of articles on Data Valuation. Discussion of Data Valuation is often mixed up with Data Monetization and could cause serious conflicts with the concepts of Privacy and Data Protection. These are ignored in the current context, and we continue our discussion with the belief that these conflicts can be effectively managed both legally and ethically.

"Measure your Data, Treasure your Data" is the motto underlying the DGPSI framework of compliance. There are two specific "Model Implementation Specifications" (MIS) in the framework (DGPSI Full Version, with 50 MIS) which are related to data valuation. (DGPSI = Digital Governance and Protection Standard of India)

MIS 9: Organization shall establish an appropriate policy to recognize the financial value of data and assign a notional financial value to each data set and bring appropriate visibility to the value of personal data assets managed by the organization to the relevant stakeholders.

MIS 13: Organization shall establish a Policy for Data Monetization in a manner compliant with law.

Recognizing the monetary value of data provides a visible purpose and logic for investment in data protection. It is not recommended to create reserves and distribute them. Similarly, "Data Monetization" is not meant to defeat the privacy of an individual but to ensure that revenue generation is in accordance with the regulation.

Leaving the discussion on the Privacy issues of data monetization to another day, let us focus on the issue of “Data Valuation”.

Data can be both personal and non-personal. It can also be "quasi-personal" in the form of pseudonymised data and de-identified data, or "anonymised personal data". For the purpose of DGPSI for DPDPA/GDPR compliance, we recommend that "anonymised" data be considered "Non-Personal Data". At the same time, in the light of the Digital Omnibus proposal, pseudonymised or de-identified data may also be considered outside the purview of "Personal Data" for GDPR/DPDPA compliance in the hands of the controller/fiduciary who does not possess the mapping of the pseudonymised/de-identified data to the real identifiable data.

Data can be further enriched through data analytics, whereby it may become an "Insight". Such "insights" can be created both from Non-Personal Data and from permitted Personal Data. Such data will also have an IPR component.

The "possibility" of conversion, with the use of various techniques, of pseudonymised, de-identified or anonymised personal data into identifiable personal data is considered a potential third-party cyber crime activity. Unless there is negligence on the part of the controller, who discloses the data in the converted state in the belief that it is not identifiable, he should be absolved of inappropriate disclosure.

Further, even personal data with "Appropriate Consent" should be considered as data that can be monetized and therefore has value both for own use and for marketing. (P.S.: "Appropriate Consent" in this context may mean a "witnessed or digitally signed contractual document without any ambiguity"... to be discussed separately.) Such data may be considered "Marketable Personal Data". Just as data can be "Sensitive", consent can be "Secured". (The concept of Secured Consent is explained in a different context.)

For the purpose of data valuation, both personal and non-personal data are relevant. The K. Gopalakrishnan committee (KGC) did explore a mechanism by which non-personal data could be valued and exchanged in a stock-market kind of data exchange. However, in the frenzy for privacy protection, the data protection law was limited to personal data and the KGC report was abandoned.

Currently, with the Comptroller and Auditor General (CAG) advising PSUs to recognize a fair valuation of their "Data Assets", it has become necessary to value both personal and non-personal data as part of corporate assets.

This will enable PSUs to realize value at least for Non-Personal Data (NPD) and "Marketed/Monetized Personal Data" (MPD).

While there are many ways in which data can be valued, one of the practical methods is the Cost of Acquisition method. This is a simple "cost accounting" based method and the least controversial.

In this method we need to identify the "Data Asset", trace its life cycle within the organization, and assign a cost to every process involved in acquiring or creating it. Such a data asset can be "bought" as a finished product from a data analytics company, or acquired in a "raw" state and converted with some in-house processing into a "consumable state", which is like a "finished product". If it is consumed entirely within the company and also stored for future use within the company, it remains a valuable data asset which generates "income" for the organization.

In this scenario, there can be a valuation method based on income generation under the well-known Discounted Cash Flow or Net Present Value method, which can be used to refine the cost-of-acquisition based valuation.

If the organization would like to transfer the consumable finished data product to another organization, a real market value could be recognized, either as a cash inflow or as a transfer price. The market value could then also be an indicator for refining the value of the data held as an asset by the organization.

With these three methods, the valuation of data can be refined, with appropriate weightages being assigned to the different values that arise for the same data set.
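A minimal sketch of such a weighted refinement across the three methods (the estimates and the weights below are illustrative assumptions of our own, not prescribed figures):

```python
# Blend the three method-wise estimates discussed above
# (cost of acquisition, DCF/NPV, observed market value)
# into one refined value using assigned weightages.
# All numbers here are illustrative assumptions.

def blended_value(estimates: dict[str, float],
                  weights: dict[str, float]) -> float:
    """Weighted average of method-wise value estimates."""
    total_weight = sum(weights.values())
    return sum(estimates[m] * weights[m] for m in estimates) / total_weight

estimates = {"cost_of_acquisition": 90.0, "dcf": 120.0, "market": 110.0}
weights   = {"cost_of_acquisition": 0.5, "dcf": 0.3, "market": 0.2}

print(blended_value(estimates, weights))  # 103.0
```

The choice of weights would itself be a policy matter for the "Data Valuation Committee" contemplated earlier in this series, and could lean towards the market estimate where genuine transfer prices exist.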

In the case of "Personal Data", we have already addressed some valuation issues in the DVSI (Data Valuation Standard of India), an early attempt to generate a personal data valuation model in which the data protection law could add a potential "Risk Investment" to the data. It recognized the value modifications arising out of the depth and age of the data. For the time being, let us consider these as refinements that can be made to the "Intrinsic Value" assigned on the basis of "Cost of Acquisition".

Hence we consider "Cost of Acquisition" as the fundamental concept of data valuation, and the emerging cost would be considered the "Intrinsic Cost" of the data. We shall proceed from here to consider a "Data Valuation Framework" as an addendum to the DGPSI framework and leave the refinement of data valuation to a parallel exercise to be developed through more academic debate.

Naavi

...Discussion Continues

 

Posted in Privacy