Fundamental Rights Impact Assessment (FRIA)

We refer to the DGPSI-AI framework, which is an extension of the DGPSI framework for DPDPA compliance. DGPSI-AI covers the requirements of an AI deployer who is a data fiduciary and has obligations under the DPDPA.

This framework is a pioneering effort in India at bringing in voluntary AI regulation, even though we do not yet have a full-fledged law on AI in place. In the EU, the EU AI Act is a comprehensive legislation to regulate AI usage, and it naturally impacts the data protection scenario as well.

One of the requirements of the EU AI Act (Article 27) is that before deploying high-risk AI systems, deployers shall conduct an “Impact Assessment” directed at the fundamental rights that the use of such systems may affect. This is called the FRIA, or Fundamental Rights Impact Assessment, to be conducted on the first use of the system. This framework may also impact the implementation of DORA (the EU Digital Operational Resilience Act).

Now the European Center for Not-for-Profit Law (ECNL) and the Danish Institute for Human Rights (DIHR) have released a template for FRIA, which is an interesting document to study. It may be compared with DGPSI-AI, which has six principles and nine implementation specifications for AI deployers.

A Copy of the report is available here

In DGPSI-AI, one of the principles is that:

“The responsibility of the AI deployer as a “Fiduciary” shall ensure all measures to safeguard the society from any adverse effect arising out of the use of the AI.”

Further, the implementation specifications include

The deployer of an AI software in the capacity of a Data Fiduciary shall document a Risk Assessment of the software, obtaining a confirmation from the vendor that the software can be classified as ‘AI’ based on whether it leverages autonomous learning algorithms or probabilistic models to adapt its behaviour and generate outputs not fully predetermined by explicit code. This shall be treated as the DPIA for the AI process.

The DPIA shall be augmented with periodical external Data Auditor’s evaluation at least once a year.

The FRIA suggested by the ECNL is considered similar to a DPIA and highlights the requirement to document the impact on fundamental rights. It could serve as a guideline for conducting DPIAs under GDPR.

While DGPSI-AI focuses on compliance with DPDPA 2023, FRIA tries to focus directly on the “Fundamental Rights”.

The template released in this context lists 51 fundamental rights over which the assessment is to be conducted.

The following questions are suggested to be explored across this canvas:

1. How can the deployment of the AI system negatively affect people?
2. What drives these negative outcomes?
3. Which people/groups are negatively affected in this scenario?
4. What is the fundamental right negatively impacted?
5. What is the extent of the interference with the respective fundamental right?
6. How would you assess the likelihood of this interference? (Explain your answers)
7. What is the geographic scope of the negative impact?
8. Within the group identified, how many people will be negatively impacted?
9. Amongst this group, are there people in situations of vulnerability such as children, the elderly, people with disabilities, low-income households, or racial/ethnic minorities? (Explain your answers)
10. What is the gravity of the harm that affected people might experience?
11. What is the irreversibility of the harm?
12. What is the level of prioritisation for this impact?
13. List the actions already in place that can prevent and mitigate the negative impact.
14. List all the additional actions that should be taken to prevent and mitigate the negative impacts.

The template suggests allocation of scores for each fundamental right and their aggregation.
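To make the score-and-aggregate idea concrete, here is a minimal sketch of how per-right scores might be computed and combined. The 1-5 rating scales, the factor names and the combining rule below are our own illustrative assumptions, not the actual ECNL/DIHR methodology.

```python
# Hypothetical FRIA-style scoring: each assessed fundamental right gets
# factor ratings (1-5), combined into a per-right score and then aggregated.
# The scales and formula are illustrative assumptions only.

def impact_score(likelihood, gravity, scale, irreversibility):
    """Combine per-right factors, each rated 1-5, into one score."""
    for v in (likelihood, gravity, scale, irreversibility):
        if not 1 <= v <= 5:
            raise ValueError("each factor must be rated on a 1-5 scale")
    # Severity as the mean of gravity, scale and irreversibility,
    # weighted by likelihood -- a common risk-matrix pattern.
    return likelihood * (gravity + scale + irreversibility) / 3

# Two of the 51 rights, with hypothetical ratings:
assessment = {
    "Right to privacy": impact_score(4, 4, 3, 2),    # 4 * 3.0 = 12.0
    "Non-discrimination": impact_score(3, 5, 4, 4),  # 3 * 13/3, approx 13.0
}

aggregate = sum(assessment.values())
priority = max(assessment, key=assessment.get)  # right needing most attention
print(aggregate, priority)
```

The maximum flags the right needing the highest prioritisation, while the aggregate gives a single comparable figure across systems.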

While we appreciate the comprehensive nature of the guideline, there is one fundamental principle that regulators and advisors should follow: any compliance has to be designed to be as simple as possible. Making it complicated will only add disproportionate cost or invite rejection.

DGPSI-AI has tried to avoid this trap and leaves it to the auditor to decide which risks are to be factored in. Again, by aligning the DGPSI framework to “compliance with DPDPA”, it is ensured that business entities need not be experts in the constitutional interpretation of fundamental rights but can focus on the law which has been passed by Parliament.

Making an assessment of the impact of an AI on a canvas of 51 fundamental rights is a humongous task and perhaps not practical. India has only six fundamental rights, of which the “Right to Privacy”, carved out of the “Right to Life and Liberty”, is relevant for DPDPA compliance and is included in DGPSI.

In the DGPSI-GDPR framework, there are the following four implementation specifications to address the issue of AI deployment.

  1. Organization shall establish an appropriate policy to identify the AI risks in its own processing and in processing with data processors, and obtain appropriate assurances.
  2. Organization shall establish an appropriate policy to ensure that there is an accountable human handler in the organization, in the AI vendor organization and in the AI-deploying data processor organization.
  3. Organization shall adopt AI deployment based on a specific documented requirement.
  4. All AI usage shall be supported by appropriate guardrails to mitigate the risks.

While we accept that the four specifications of DGPSI-GDPR addressing AI requirements may not be as comprehensive as the ECNL framework, we hope they are more practical.

I would be happy to receive the views and comments of the experts.

Naavi

 

Posted in Privacy | Leave a comment

Business Model Maturity Index Method of Data Valuation

In continuing our discussion on Data Valuation methods, we have come across an interesting approach called the “Business Model Maturity Index” method.

This method assesses an organization’s capability in data and analytics as it progresses through defined levels such as “Chaotic”, “Awareness”, “Defined”, “Managed” and “Optimized”.

We have discussed a version of this thought in expanding the “Theory of Data”  and more particularly when discussing the  (life cycle of data) (Also refer here).

Mr Bill Schmarzo, an author and educator in the domain of Big Data, has developed the concept of the Big Data Business Model Maturity Index to help organizations measure how effective they are at leveraging data and analytics to power their business models.

This approach suggests growth from the Business Monitoring phase to Business Insights, Business Optimization, Business Monetization and Data Monetization, leading through a Business Metamorphosis to a data-driven organization.

These academic thoughts are evolving the world over, and India needs to catch up with the developments. Perhaps FDPPI will lead this academic exercise. If any academic institutions join hands, perhaps we can make further progress.

Naavi

Also Refer here

 

 


Data Valuation Methods-3

Continued

In our previous post we discussed the different data valuation methods proposed by the Australian researchers Moody and Walsh.

The first step in data valuation in an organization starts with creation of the Data Inventory. Then they need to be classified. DGPSI discusses extensively how a centralized personal data inventory can be created and the classifications that are required for DPDPA Compliance.

For the purpose of Data Valuation, we still need the inventory of data, but the classification system has to follow a different pattern.

Firstly, data valuation for an organization needs to cover both Non Personal Data and Personal Data. Currently, Non Personal Data is classified on the basis of information security requirements and Personal Data on the basis of data protection laws.

The “Value perspective” for both Non personal data and personal data is different.

As regards personal data, DVSI outlines the method of adding depth, age and sensitivity weightages to the intrinsic value of the personal data set.
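As a rough sketch of what such weighting could look like in practice, consider the following. The factor tables and the multiplicative formula here are our own illustrative placeholders, not the published DVSI scales.

```python
# Illustrative weighting of an intrinsic (cost-based) value for a personal
# data set by depth, age and sensitivity. All weights are hypothetical
# placeholders, not the actual DVSI parameters.

DEPTH_WEIGHT = {"basic": 1.0, "profiled": 1.5, "behavioural": 2.0}
AGE_WEIGHT = {"fresh": 1.0, "recent": 0.8, "stale": 0.5}
SENSITIVITY_WEIGHT = {"ordinary": 1.0, "sensitive": 1.8}

def weighted_value(intrinsic_value, depth, age, sensitivity):
    """Apply the three weightages to the intrinsic value of a data set."""
    return round(intrinsic_value
                 * DEPTH_WEIGHT[depth]
                 * AGE_WEIGHT[age]
                 * SENSITIVITY_WEIGHT[sensitivity], 2)

# A freshly collected, profiled, sensitive data set costing 100 units:
print(weighted_value(100, "profiled", "fresh", "sensitive"))  # 270.0
```

The same intrinsic cost thus yields different carrying values depending on how deep, how fresh and how sensitive the data set is.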

In the case of Non Personal Data, there are data related to finance, marketing, administration, production and governance. Some marketing data, such as market reports and surveys, may be bought or subscribed to; there are specific costs associated with them which can be traced.

The weighted value additions are however a matter of expert views based on the knowledge of how data impacts the business.

In the case of public sector organizations or NGOs, not all data usage can be traced to profit making. There is a “Social Cost Benefit” associated with the activities of the organization. Capturing the creation of such value to society as the value of the output data is a challenge.

The cost-based method therefore becomes the method usable by most organizations, since it requires no social cost-benefit analysis. However, “revaluation”-based costing can be done in some cases to correct the cost-based estimates.

Once the method for finding out the intrinsic cost is finalized with or without the refinements such as adding weights for different factors, the organization can value the data assets as on the date of the balance sheet and also monitor its movement at periodical intervals.

As a framework however, there would be a need for a Governance structure with a “Data Valuation Committee” supported by a “Data Valuation Officer” taking the responsibility for making appropriate policies and implementing them.

When DGPSI is used as the framework, the Data Valuation and Data Monetization are to be supported by  appropriate policies along with the clearance of the consent, legitimate use or exemption.

While taking consent for Data Monetization, it is considered necessary for the organization to obtain a special consent which is properly authenticated and preferably witnessed by a third party, with a declaration such as:

“The data principal confirms that he/she understands that this consent covers monetization of the personal data and has provided a free consent for the same. The consent is witnessed by …………………, who confirms that the nature of this consent has been explained to the data principal and that he/she has understood it before agreeing to provide the consent.”

This is the beginning of the data valuation era in India, and the suggestions contained herein will be refined as required.

At present, Naavi’s Ujvala Consultants Pvt Ltd is open to accepting Data Valuation projects, and interested entities may contact Naavi for details.

Naavi


Data Valuation Methods…2

Continued

In developing a DGPSI-Data Valuation framework in support of DGPSI-Full version, let us now explore some globally prevailing thoughts on Data Valuation.

In 1999, two Australian researchers, Daniel L. Moody and Peter Walsh, published a paper, “Measuring the Value of Information: An Asset Valuation Approach”, in which they discussed what are referred to as the “Seven Laws of Data Valuation”. These laws cover the following characteristics of data that contribute to its valuation:

  1. Data is infinitely shareable.
  2. Value increases with use.
  3. Information is perishable.
  4. Value increases with accuracy.
  5. Value increases with synergy.
  6. Value increases with more data up to an overload point, but may drop thereafter.
  7. Information is self-generating.

Moody and Walsh also described how data can be a “raw material” which, when used with software and hardware as plant and equipment, produces information as the end product.

In this concept, software itself is an asset which, like a catalyst, works with input data to produce output data while itself remaining unchanged. The self-learning AI algorithm is a distinct category of data which cannibalizes the output data to transform itself as it evolves as a tool.

“Data” as software represents an asset like a “Fixed Asset” in the user organization, while it could be a “Finished Data Product” in the software company.

When “Data” is used in 3D printing, data as input (the 3D scan of the object) combines with data as software for 3D printing and with physical raw material to result in the physical object (the printed component) as the end product. The software remains a re-usable element for the next production of a different product. The design data also remains as a by-product which can be re-used if a similar object has to be printed again.

There is another kind of data which is also used as a “Fixed Asset”. It is the “Content” that is used to generate value through subscriptions or by carrying advertisements. The advertisement revenue or the subscription revenue depends on how good the content is and how well the content itself is promoted: an advertisement “of the content” promotes “the content” in which other “advertisements” are embedded. The valuation of such assets needs to take this into account. Such assets are also amenable to depreciation and sensitive to the timeliness and accuracy of the content.

Valuing such content needs to take into account this complex web of revenue-generation possibilities along with the time value of money. Such assets are better suited to the Discounted Cash Flow (DCF) or Net Present Value (NPV) method of valuation than to the Cost of Acquisition method.

Hence some data assets are more amenable to the Cost of Acquisition (COA) method while others are more amenable to the DCF/NPV method. Some may require such frequent adjustment for the time value of content that a “Revaluation Method” is more acceptable.
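A minimal sketch of the DCF/NPV idea applied to a content asset follows. The projected cash flows and the 10% discount rate are hypothetical figures chosen only to illustrate the mechanics.

```python
# Discounted-cash-flow valuation of a content asset: sum each projected
# yearly cash flow discounted back to the present. Figures are hypothetical.

def dcf_value(cash_flows, discount_rate):
    """NPV of cash flows expected at the end of years 1, 2, 3, ..."""
    return round(sum(cf / (1 + discount_rate) ** year
                     for year, cf in enumerate(cash_flows, start=1)), 2)

# Content expected to earn 50, 40 and 30 units of subscription/advertising
# revenue over the next three years, discounted at 10% per annum:
print(dcf_value([50, 40, 30], 0.10))  # 101.05
```

If the cost of acquiring or producing the content differs materially from this figure, the DCF estimate can be used to refine (or revalue) the cost-based number.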

Naavi has earlier propounded a “Theory of Data” from which many of the above seven laws can be implied. Naavi’s theory is built on three hypotheses, namely “Data is in the beholder’s eyes”, “Data has a reversible life cycle” and “Changes in the value of data during the life cycle belong to different contributors”.

In the Puttaswamy judgement, Justice Chandrachud remarked that “data is non-rivalrous”, meaning that it can be duplicated: it can change hands without depriving the earlier holder of its use. However, when we look at the valuation of data, we are confronted with two conflicting valuation dilemmas.

Firstly, in the case of “confidential” information, the sharing of data dilutes its value, and sometimes destroys it completely. A “password”, for example, is data whose value is destroyed completely when shared.

On the other hand, some data, such as “News” or “Education”, increases in value when it is available for access by many. Hence a data valuer needs to classify the data properly before assigning a value to it under this law.

The second law “Value increases with use” is reflected in the type of content mentioned above.

For example, if nobody knows that a certain data set exists, it cannot have a value. A classic example is the DGPSI framework, which is today known only to a few in India, and whose value is limited to the recognition of this set of people. If it were known to more people, its value would increase correspondingly. This is because it is a “Data Asset” meant to be used by other data users, like a software.

The third law, that “information is perishable”, is relevant for personal data valuation and has been used in the DVSI model, because the Data Fiduciary’s permission to use the data depends on consent or legitimate use; the utility value of the data vanishes once the consent expires. In the data category of “News”, the data may become stale for the news reader, while for an investigative researcher there may be a premium value in the “forgotten data”. A classic example is some of the articles on naavi.org which may be 25+ years old but which, for somebody who wants to trace the legislative history of Cyber Laws in India, are a treasure.

This principle, that the value of data may depend on the context and the audience, is part of the first hypothesis of Naavi’s theory of data: data itself is in the beholder’s eyes, and therefore the value of data is also in the beholder’s eyes.

This means that the valuation of data has to be tagged with a “Context” weightage.

The fourth law, that the value of information increases with accuracy, holds in the context of its usage. There can be instances, such as “anonymised data”, where accurate data is masked for a specific purpose; though the accuracy of the data is deliberately reduced, the value may be preserved or even enhanced, because the data can be used for purposes other than those to which the accurate data could have been put.

The fifth law, that the value of information increases when combined with other information, is well noted, not only because data from one division may be useful for another division in an organization, but also because the entire Data Engineering and Data Analytics industry revolves around the synthesis of data and the generation of new insights.

However, in a personal data context, where permissions are “purpose limited”, data collected for one purpose may not be automatically usable for another purpose, and this may conflict with this observation of Moody and Walsh. It is, however, fine for non-personal data.

The sixth law holds that there is an “overload point” after which more data is not necessarily better, since “data fatigue” may set in. Where laws differ for different scales of operation (e.g., a “significant” social media intermediary or a “Significant Data Fiduciary”), new obligations may come in beyond a threshold, changing the usage pattern of the data.

The seventh law, that information is self-generating (it is not depletable), is an interesting observation, since the more we use certain data, the usage pattern itself becomes additional metadata that enriches the core data. Again, this has to be seen along with the “data usage licence”, such as a digital library licence based on the number of uses (from the user’s perspective), the expiry of permission, or a “limitation of use under law”.

Thus the seven laws of data valuation indicated by Moody and Walsh are an interesting study which can be compared with the implications of Naavi’s theory of data and the DVSI model.

We request academicians to study this relationship further.

Naavi

…continued


Data Valuation Methods-1

P.S: This is a continuation of a series of articles on Data Valuation. Any discussion of Data Valuation gets mixed up with Data Monetization and could cause serious conflicts with the concepts of Privacy and Data Protection. These conflicts are set aside in the current context, and we continue our discussion in the belief that they can be effectively managed both legally and ethically.

“Measure your Data, Treasure your Data” is the motto underlying the DGPSI framework of compliance. There are two specific “Model Implementation Specifications” (MIS) in the framework (the DGPSI Full Version, with 50 MIS) which are related to data valuation. (DGPSI = Data Governance and Protection Standard of India)

MIS 9: Organization shall establish an appropriate policy to recognize the financial value of data and assign a notional financial value to each data set and bring appropriate visibility to the value of personal data assets managed by the organization to the relevant stakeholders.

MIS 13: Organization shall establish a Policy for Data Monetization in a manner compliant with law.

Recognizing the monetary value of data provides a visible purpose and logic for investment in data protection; the intention is not to create distributable reserves out of this value. Similarly, “Data Monetization” is not meant to defeat the privacy of an individual but to ensure that revenue generation is in accordance with the regulation.

Leaving the discussion on the Privacy issues of data monetization to another day, let us focus on the issue of “Data Valuation”.

Data can be both personal and non-personal. It can also be “quasi-personal” in the form of pseudonymised data and de-identified data, or “anonymised personal data”. For the purpose of DGPSI for DPDPA/GDPR compliance, we recommend that “anonymised” data be considered “Non Personal Data”. At the same time, in the light of the Digital Omnibus Proposal, pseudonymised or de-identified data may also be considered outside the purview of “Personal Data” for GDPR/DPDPA compliance in the hands of a controller/fiduciary who does not possess the mapping of the pseudonymised/de-identified data to the real identifiable data.

Data can be further enriched through data analytics, and it may become an “Insight”. Such “insights” can be created both from Non Personal Data and from permitted Personal Data. Such data will have an IPR component also.

The “possibility” of converting pseudonymised, de-identified or anonymised personal data back into identifiable personal data using various techniques is considered a potential third-party cyber-crime activity. Unless there is negligence on the part of the controller who discloses the data in the converted state in the belief that it is not identifiable, he should be absolved of inappropriate disclosure.

Further, even personal data with “Appropriate Consent” should be considered as data that can be monetized and which therefore has value both for own use and for marketing. (P.S: Appropriate Consent in this context may mean a “witnessed or digitally signed contractual document without any ambiguity”…to be discussed separately.) Such data may be considered “Marketable Personal Data”. Just as data can be “sensitive”, consent can be “secured”. (The concept of Secured Consent is explained in a different context.)

For the purpose of data valuation, both personal and non-personal data are relevant. The K Gopalakrishnan Committee (KGC) did explore a mechanism by which non-personal data could be valued and exchanged in a stock-market-like data exchange. However, in the frenzy for privacy protection, the data protection law was limited to personal data and the KGC report was abandoned.

Now that the Comptroller and Auditor General (CAG) has advised PSUs to recognize a fair valuation of their “Data Assets”, it has become necessary to value both personal and non-personal data as part of the corporate assets.

This will enable PSUs to realize value at least for Non Personal Data (NPD) and “Marketed/Monetized Personal Data (MPD)”.

While there are many ways by which data can be valued, one of the practical methods is the Cost of Acquisition method. This is a simple “Cost Accounting” based method and the least controversial.

In this method we need to identify the “Data Asset”, trace its life cycle within the organization and assign a cost to every process involved in acquiring or creating it. Such a data asset can be “bought” as a finished product from a data analytics company, or acquired in a “raw” state and converted, with some in-house processing, into a “consumable state” which is like a “finished product”. If it is consumed entirely within the company and also stored for future use within the company, it remains a valuable data asset which generates “income” for the organization.
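The tracing exercise described above can be sketched as a simple cost roll-up. The process names and amounts below are hypothetical examples of traceable life-cycle costs, not prescribed cost heads.

```python
# Cost-of-acquisition valuation: identify the data asset, trace its life
# cycle, and total the cost of every process involved in acquiring or
# creating it. The cost heads and figures are hypothetical.

acquisition_costs = {
    "purchase / raw collection": 40_000,
    "cleaning and labelling": 15_000,
    "in-house processing to consumable state": 25_000,
    "storage and security": 10_000,
}

# The intrinsic value of the data asset is the total traced cost.
intrinsic_value = sum(acquisition_costs.values())
print(intrinsic_value)  # 90000
```

Because every figure here comes straight from cost accounting records, this is also the least controversial number to defend in an audit.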

In this scenario, there can be a valuation method based on income generation under the well-known Discounted Cash Flow or Net Present Value method, which can be used to refine the cost-of-acquisition based valuation.

If the organization would like to transfer the consumable finished data product to another organization, a real market value could be recognized either as a cash inflow or as a transfer price.  Then the market value could also be an indicator for refining the value of the data held as an asset by the organization.

With these three methods, the valuation of data can be refined, with appropriate weightages assigned to the different values that arise for the same data set.
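One possible way to reconcile the three figures (cost of acquisition, the DCF/NPV refinement, and a market/transfer price) into a single carrying value is a weighted average. The 50/30/20 weightages and the input figures below are hypothetical; in practice they would be fixed by the organization's valuation policy.

```python
# Blend the three valuation signals discussed in the text with
# policy-defined weightages. The split and figures are hypothetical.

def blended_value(coa, dcf, market, weights=(0.5, 0.3, 0.2)):
    """Weighted average of cost-of-acquisition, DCF and market values."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weightages must sum to 1")
    w_coa, w_dcf, w_mkt = weights
    return round(w_coa * coa + w_dcf * dcf + w_mkt * market, 2)

# COA of 90,000 units, DCF estimate of 110,000, observed transfer
# price of 120,000:
print(blended_value(90_000, 110_000, 120_000))  # 102000.0
```

Shifting the weightages toward the market figure as real transfer transactions accumulate would be one way to let observed prices gradually dominate the estimate.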

In the case of “Personal Data”, we had already addressed some valuation issues in the DVSI (Data Valuation Standard of India), which was a primitive attempt to generate a personal data valuation model in which the data protection law could add a potential “Risk Investment” on the data. It recognized the value modifications arising out of the depth and age of the data. For the time being, let us consider these as refinements that can be made to the “Intrinsic Value” assigned on the basis of “Cost of Acquisition”.

Hence we consider “Cost of Acquisition” as the fundamental concept of data valuation, and the emerging cost would be considered the “Intrinsic Cost” of the data. We shall proceed from here to consider a “Data Valuation Framework” as an addendum to the DGPSI framework, and leave the refinement of data valuation to a parallel exercise to be developed through more academic debate.

Naavi

...Discussion Continues

 


“Measure your data, Treasure your data”: A movement for the year 2026

The DGPSI (Data Governance and Protection Standard of India), as a framework for DPDPA compliance, adopted the principle that data as an asset must be recognized with a monetary value which should also be rendered visible. Accordingly, one of the implementation specifications adopted by the DGPSI framework (Full Version) was:

“Organization shall establish an appropriate policy to recognize the financial value of data and assign a notional financial value to each data set and bring appropriate visibility to the value of personal data assets managed by the organization to the relevant stakeholders.”

The concept of the DGPSI framework was first born as PDPSI, or Personal Data Protection Standard of India, in 2019. It was supported by “Naavi’s Theory of Data”, which recognizes a “value” for data that can vary during the life cycle of data processing, with different owners recognized for the different value components of the data. It also recognized that data value is linked to the capability of the user, since “data is in the beholder’s eyes”. By 2021, a model for Data Valuation had evolved for professional discussion.

The industry however was not ready to take cognizance of the Data Valuation as a Governance principle and the DGPSI provision remained only a suggestion.

The year 2025 has been a momentous year in India, with the notification of DPDPA 2023 setting a timeline for its implementation. The industry has now taken DPDPA 2023 seriously and is working towards compliance. The DGPSI framework has been a leading governance tool for compliance, usable for implementation as well as audit and assessment.

In the meantime, it is the PSUs which seem to have taken the first step in documenting the value of the Data Assets thanks to the initiative taken by the CAG. CAG has realized that there is no point in merely raising a slogan that “Data is the New Oil” and there is a necessity to recognize the financial value of data and make it visible in the accounting system.

We fully endorse this view, and in the year 2026 we have taken a New Year resolution to work towards a movement to popularize the concept of Data Valuation and to help the industry arrive at a reasonably acceptable methodology for making this possible.

“Measure Your Data, Treasure Your Data” will be the motto that will drive this movement and add a new life to the DGPSI framework.

Join hands with Naavi and FDPPI to make this movement a grand success.

One of the first activities under this would be a Round Table in Bangalore… watch out for the date… and participate.

Naavi
