Second Anniversary of DPDPA 2023 to be celebrated. Be present for some useful information.

On 11th August 2025, it will be two years since the DPDPA 2023 was gazetted, with the proviso that different provisions of the Act would come into effect on different dates to be notified. In January 2025, the Government released a set of “Draft Rules” for public comments, which reportedly received 6,915 responses.

The industry is raising one objection after another in an attempt to postpone the notification of the Act.

Privacy activists had their own objection to the proposed amendment to the RTI Act, which the Government has now safely shelved with a request for the opinion of the Attorney General, to be retrieved at its discretion.

In the meantime, digital payment companies such as NPCI have sought exemption from the “Consent” provision for repeat transactions. This is a critical aspect of consent in financial transactions and is prone to misuse. DPDPA requires one consent for one purpose, even if multiple payments are required; one consent can therefore cover repeated payments for the same purpose. Even if we take the strict view that each payment is a “different purpose”, user confirmation is already part of the current transaction authorization through OTP, and hence there is no need for an exemption.

Thus it appears that the delay in the implementation of the law is nothing but a deliberate attempt by “Data Thieves” to postpone it.

On 11th August 2025, at 7.00 pm, we shall have an online LinkedIn presentation where two aspects will be discussed.

  1. A case study of an AI platform that is stealing data worth Rs 36 crores each month and sending the money to a tax haven
  2. The preliminary version of DGPSI-AI, a DPDPA compliance framework for Data Fiduciaries using an AI process.

Link for registering for the event  is available here.

P.S: The case study referred to above is based on a forensic extraction of information by a whistle blower. It indicates a large-scale, continuing data theft, a preparation by these Data Thieves to bribe the DPDPA enforcement machinery even before it is set up, and a more alarming scheme of “silencing the whistle blower” by planting narcotics in his car or staging a suicide, all of which needs investigation by the ED and CBI. The whistle blower is ready to share his information with the investigating authorities if they are interested.

Naavi

Posted in Privacy | Leave a comment

NIXI has killed the “Dot In” domain name

The development of country codes in top-level domain names began with the crowding of English domain names in gTLDs such as .com. There was also a concern that sovereign Governments did not have adequate control over domain name registrations. Hence ICANN introduced the country code TLDs and permitted sovereign Governments to set up the technical infrastructure to manage country code domain registration as well as dispute resolution.

India adopted the .in domain registration system in 2004, along with .co.in, .org.in and .net.in, at an economical fee structure. It also introduced .gov.in for Government use and adopted measures for the reservation of domain names to protect trademarks.

The system is managed by the National Internet Exchange of India (NIXI), a Section 8 company established in 2003.

Philosophically, the .IN ccTLD embodies India’s digital sovereignty and national identity online. Extending beyond mere addressing, .IN—and its 15-script internationalized counterparts (e.g., .भारत in Devanagari, .ભારત in Gujarati)—serve to:

  • Foster digital inclusion across India’s linguistic and cultural diversity

  • Provide a trusted Indian namespace for businesses, government, and citizens

  • Enable a multilingual Internet that mirrors India’s sociolinguistic fabric

Government‐led measures to promote .IN have included:

  • Sunrise and premium name auctions to secure brand names and raise registry revenue

  • Registrar incentives and festive offers (e.g., discounted or free first-year registrations) announced by MeitY and NIXI to accelerate adoption

  • Awareness campaigns highlighting .IN as a marker of “Made in India” digital identity

  • Universal Acceptance support through NIXI’s BhashaNet initiative, ensuring all scripts and domain lengths resolve seamlessly across applications

Recent developments include:

  • Surpassing 4.1 million .IN registrations (targeting 5 million) and entering the top-10 global ccTLDs by zone count

  • Launching IDN ccTLDs in all 22 scheduled Indian languages, making India the only ccTLD offering 15 localized scripts

  • Rolling out festive promotional offers for accredited registrars and free personalized .IN email services (10 GB storage) to users

  • Expanding Internet Exchange Points (IXPs) from four major nodes to 77 nationwide, improving local traffic routing, reducing latency, and lowering bandwidth costs for ISPs

  • Operating IRINN to allocate IPv4/IPv6 addresses (now > 80% IPv6 coverage) and planning “second-tier” NIXI hubs in partnership with state governments to serve smaller ISPs

  • Collaborating with CCA on NIXI SSL CA to issue domestically trusted SSL/Digital Signature certificates and reduce reliance on foreign providers

Through these measures, India’s .IN ccTLD has evolved from a restricted, bureaucratic registry into a dynamic, market-oriented namespace that underpins national digital identity, fosters multilingual inclusion, and reinforces digital sovereignty under NIXI’s governance.

Currently NIXI generates a large surplus; its recent financial performance indicates that in FY 2022-23 it created a surplus of Rs 85.73 crores. The Secretary, MeitY, is the Chairman of NIXI, and the Board of Directors mainly consists of Joint Secretaries of MeitY. The Director General of CERT-In is also a co-opted Director.

The entire management is therefore in Government hands; NIXI is a Government-owned company and is subject to the jurisdiction of the CVC, the CAG and the RTI Act.

Domain Name Acquisitions

According to the NIXI website, under clause 12(2) of the terms of registration,

 “The .IN Registry reserves the right to instruct its Registry Services Provider to deny, cancel, transfer or otherwise make unavailable any registration that it deems necessary or place any domain name(s) on registry lock and/or put a domain name on hold in its discretion :

(1) to protect the integrity and stability of .IN Registry;

(2) to comply with any applicable laws, Indian government rules or requirements, requests of law enforcement, in compliance with any dispute resolution process;

(3) to avoid any liability, civil or criminal, on the part of the .IN Registry, as well as its affiliates, subsidiaries, officers, directors, representatives and employees;

(4) for violations of this Agreement; or

(5) to correct mistakes made by Registry or any Registrar in connection with a domain name registration. The Registry also reserves the right to freeze a domain name during resolution of a dispute pending before arbitrator(s) appointed under Registry’s Domain Name Resolution Policy and/or a court of competent jurisdiction”.

According to a document on Anti-Abuse on the NIXI website, the domains forcibly acquired by NIXI include the following:

Dit.in
Mit.in
DeitY.in
Mygov.in
Newindia.in
Govts.in
iaf.in
G20.in
nseindia.in
nse.in
bank.in
fin.in
school.in
alumani.in
Kpkb.in

It is not clear on what “abuse” grounds these were acquired by NIXI. There should be a documented reason for each of these acquisitions, preceded by a request, an inquiry and a decision.

However, the recent notice issued in respect of dpdpa.in, owned by Ujvala Consultants Pvt Ltd, of which Naavi is the Managing Director, indicates that there is no system in place for such acquisitions to be conducted in a legally approved manner.

With the DPDPA 2023 passed as an Act in which personal data is given an option to be “nominated”, there is a legal recognition that “data” is “property”. In respect of domain names, though the right is created out of a contractual agreement, a “domain name” is considered a trademark-like property. The registrant therefore has ownership rights and builds legitimate activity around the domain name, which may be a commercial activity under .in.

When the .in domain name was launched, Naavi quickly adopted .in domain names for his activities and promoted the use of .in instead of .com wherever feasible. The current move of NIXI to start acquiring .in properties without justification, while refusing to use .gov.in domains for the Government, will erode the confidence of businesses in .in domain names. It has now become necessary for Indian businesses to keep a .com backup for every .in domain name they register, since NIXI may pounce on them at any point of time. Probably, instead of registering both domains, they would prefer to use the .com domain name itself and build a brand around it, since .com domain names are better reflected in search engines.

The public will notice that this action against “dpdpa.in” is exclusive to the .in domain name and does not extend to other domain names such as dpdpa.com. This is not to suggest that it should, but to indicate the development of a perception that if you are on .com, you are safer from the domain name acquisition risk.

NIXI is therefore killing the .in domain name movement, from which it created a surplus of Rs 85 crores last year.

The “acquisition” therefore has to be considered an acquisition of the property of a private citizen by the Government, falling under Article 300A of the Constitution. The domain name also represents a means of “expression”, and hence acquisition of domain names is a direct curtailment of the freedom of expression under Article 19 of the Indian Constitution.

Acquisition of domain names has to be considered an infringement of fundamental rights protected under the Constitution, amenable to being questioned in the Supreme Court of India under Article 32 of the Constitution. It can also be questioned under Article 226 of the Constitution in an appropriate High Court.

If the “acquisition” is held to be arbitrary and not proportional to any “abuse”, the Government has to rescind the acquisition and also pay adequate compensation for the infringement. The officials responsible for such infringement may be liable to be punished for “breach of trust” or on other similar grounds.

In the case of dpdpa.in, NIXI sent an e-mail, which was not digitally signed, to the registrant and the registrar, placing the domain name under “server lock” and also indicating that after 5 days it would initiate transfer. There was no “show cause” notice or any document showing a “reason” for exercising these extraordinary powers.

It merely stated: “This is to inform you that Govt. of India desires to get the domain dpdpa.in registered for itself.” It quoted clause 12(2) of the terms and conditions for registrants and declared:

“Should you need any clarification on this matter, please feel free to contact us within 05 working days and .IN Registry shall initiate the transfer of the domain dpdpa.in to Govt. of India thereafter.”

In other words, there has been no explanation of why this action is being initiated, or whether any illegal activity was traced to dpdpa.in. No copy of the instruction from the Government of India expressing its “desire” was furnished.

Even if the Government of India “desires” it, the decision cannot be based on “desires”; this is an “Emergency” mindset which seriously erodes the democratic nature of the Modi Government.

I have therefore raised an objection and issued a digitally signed e-mail notice to NIXI, with copies to other departments of the Government including MeitY and the PMO.

Since I have so far received no reply from NIXI, except a phone call from one Mr Rajiv requesting that social media posts in this regard be avoided, I am, with a lot of regret, initiating action to escalate the dispute to a court of law. The notice of NIXI was dated 1st August, and the 5 working days may end on 7th August, which is two days from now.

Naavi has been a Netizen activist since 1998. He was the first to help law enforcement bring about the historic first conviction under ITA 2000 (2004), obtained the first civil order from Adjudication against a bank in a phishing transaction (2008-2022), submitted the first Section 65B certificate to a court of law (2004), and is recognized as a pioneer of the Cyber Law scenario in India. Of late, Naavi has been focussing more on Data Protection and DPDPA compliance, and his pioneering work continues in the good interests of the country. Naavi has demonstrated his patriotic credentials much more than most private individuals, and his war against cyber pornography and against digital corruption through Bitcoins is well documented.

Naavi had earlier launched the “Cyber Law Awareness Movement” to spread awareness and knowledge of ITA 2000, and is presently spreading awareness and knowledge of DPDPA, highlighting the need for compliance in industry.

It now appears that Naavi has to launch a new initiative on “NIXI Dadagiri” and raise a slogan “Nahi Chalega, Nahi Chalega, NIXI Dadagiri”.

Right from my student days, I have been known to oppose strikes and such negative slogans, but at my age, NIXI is forcing me to give up my productive activities and start an “Andolan” against the emergency mindset of NIXI.

I request professionals to support me in this initiative and start an e-mail campaign by sending an e-mail to ceo@nixi.in, or to the Chairman of NIXI or any of the Directors, opposing the “acquisition of private .in domain names in an arbitrary fashion like what they have exhibited in the case of dpdpa.in”.

Please send such an email with the subject line “We oppose NIXI Dadagiri” and the content:

“I object to the arbitrary domain name acquisition of dpdpa.in and any other .in domain name by the Government without a proper justification”.

This campaign may run till NIXI rescinds its notice.

Naavi


DGPSI-AI Principles..A summary

DGPSI-AI is a comprehensive framework for AI governance and DPDPA compliance developed by Naavi.

This is a forward-looking initiative to bridge the gap between artificial intelligence and data protection, built around a series of principles. The framework is an extension of the existing Data Governance and Protection System of India (DGPSI) and is specifically designed to guide organizations in ensuring their use of AI is compliant with India’s Digital Personal Data Protection Act (DPDPA).

The DGPSI-AI framework is built upon six core principles: “Unknown Risk is Significant Risk,” Accountability, Explainability, Responsibility, Security, and Ethics. Together, these principles aim to provide a robust structure for the ethical and lawful deployment of AI systems.

Principle 1: “Unknown Risk is Significant Risk”

The foundational principle of the DGPSI-AI framework posits that any process involving AI—defined as autonomous software capable of modifying its behavior without human intervention—inherently carries an “unknown risk.” This is because AI, particularly self-correcting software, can evolve in unpredictable ways, potentially leading to unforeseen and catastrophic outcomes. Unlike traditional software, where risks are generally identifiable and manageable through testing, AI’s ability to autonomously alter its code introduces a level of uncertainty.

This principle suggests that any organization deploying AI should be automatically classified as a “Significant Data Fiduciary” under the DPDPA. This classification mandates more stringent compliance requirements, including the necessity of conducting Data Protection Impact Assessments (DPIAs), appointing a Data Protection Officer (DPO), and undergoing data audits. Downgrading this risk classification would require substantial documentation and explicit assurances from the AI developer.

Principle 2: Accountability

The principle of Accountability is central to AI governance. Within the DGPSI-AI framework, it establishes that autonomous AI systems must be accountable to the Data Fiduciary. Since an AI algorithm cannot be held legally responsible as a juridical entity, the accountability rests with the human element behind it. This could be an individual or a corporate entity, aligning with Section 11 of the Information Technology Act, 2000, which holds the person causing an automated system to act responsible for its actions.

Implementation of this principle involves two key actions. Firstly, a mandated digital signature from the developer should be embedded in the AI’s code, creating a “chain of AI ownership.” Secondly, for every AI system, a designated human “Handler” or “AI Owner” must be disclosed. This ensures that for external purposes there is a clearly identified responsible party (the DPO or a compliance officer), while internally a specific process owner is assigned accountability.
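The two actions above can be sketched in code. This is a minimal illustration only, not part of the DGPSI-AI specification: it uses an HMAC for brevity where a real “chain of AI ownership” would use an asymmetric digital signature (e.g. RSA or ECDSA), and all field names are assumptions.

```python
import hashlib
import hmac
import json

def sign_ownership_manifest(model_bytes: bytes, developer_id: str,
                            handler: str, secret_key: bytes) -> dict:
    """Attach a verifiable ownership record to an AI artifact.

    Illustrative sketch: a production chain of ownership would use
    asymmetric signatures, not a shared HMAC key.
    """
    manifest = {
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "developer": developer_id,   # start of the "chain of AI ownership"
        "handler": handler,          # designated human accountable for the AI
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_ownership_manifest(manifest: dict, model_bytes: bytes,
                              secret_key: bytes) -> bool:
    """Check that the manifest is untampered AND matches the artifact."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["model_sha256"] == hashlib.sha256(model_bytes).hexdigest())
```

A deployer could refuse to load any model whose manifest fails verification, so that every running AI traces back to a named developer and a named human handler.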

Principle 3: Explainability

The third principle, Explainability, addresses the “black box” nature of many AI systems. It requires that organizations can provide clear and accessible reasons for the outputs generated by their AI. This is crucial for building trust and is a key component of transparency, a fundamental tenet of data protection law. The ability to explain an AI’s decision-making process is vital for the Data Fiduciary to fulfill its obligations to data principals.

Explainability is not only about transparency but also about risk management. If a Data Fiduciary cannot explain how an AI functions, the full accountability for its consequences may shift to the developer or licensor, who would then be considered a Joint Data Fiduciary. Real-world applications of explainability are seen in financial services for loan decisions, in healthcare for diagnoses, and in human resources for recruitment, ensuring that decisions are fair, unbiased, and justifiable.
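A hedged sketch of what an explainability record might look like at the deployer’s end, so that a decision such as a loan refusal can later be explained to the data principal. The class and field names are illustrative assumptions, not part of the DGPSI-AI framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable record per AI output, retained by the Data Fiduciary."""
    subject_id: str                 # pseudonymous id of the data principal
    model_version: str              # which AI produced the output
    decision: str                   # the output, e.g. "declined"
    reason_codes: list = field(default_factory=list)  # human-readable factors
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def explain(record: AIDecisionRecord) -> str:
    """Render the record as a plain-language explanation."""
    factors = "; ".join(record.reason_codes) or "no factors recorded"
    return (f"Decision '{record.decision}' by model "
            f"{record.model_version}: {factors}")
```

Keeping such records per decision gives the Data Fiduciary something concrete to produce when a data principal, an auditor, or a regulator asks why the AI decided as it did.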

Principle 4: Responsibility

The principle of “Responsible AI Usage” emphasizes that the deployment of AI should primarily benefit the data principals and not solely serve the profit motives of the Data Fiduciary. This aligns with international principles such as the OECD’s “Inclusive Growth” and UNESCO’s principles of “necessity and proportionality.” The use of AI should be justified by the value it adds over non-AI processes, and this justification must be documented.

Organizations are expected to create an “AI use justification document” that outlines the purpose of the AI, a cost-benefit analysis comparing it to traditional methods, and evidence that the value proposition could not be achieved otherwise. This ensures that AI is not adopted merely for fashion but for genuine business and societal needs, with the welfare of the data principal at the forefront.

Principle 5: Security

Security within the DGPSI-AI framework extends beyond typical cybersecurity to encompass the prevention of harm caused by the AI algorithm itself. The principle recognizes three main areas of risk to the data principal: potential physical harm, mental manipulation through “dark patterns,” and deeper neurological manipulation.

Given the “unknown” nature of AI risks, the Data Fiduciary must assume legal liability for any consequences. This necessitates obtaining assurances from the developer regarding rigorous testing and augmenting this with a “Liability” admission clause, supported by adequate insurance. The framework mandates that every AI algorithm should be insured against causing physical, mental, or neurological damage to users.

Principle 6: Ethics

The final principle of Ethics urges organizations to operate beyond the strict confines of written law and consider the broader societal good. This is particularly relevant in the current landscape where specific AI legislation is still developing. The DPDPA’s definition of a “Fiduciary” already implies an ethical duty to protect the interests of the data principal, and this principle extends that duty to AI processes.

Ethical considerations are to be identified through a thorough risk assessment process. The framework suggests that “Post Market Monitoring,” similar to the EU AI Act, can be an ethical practice where the impact of AI on data principals is monitored even after the initial processing is complete. Another ethical consideration is the concept of “Data Fading,” where the AI could, for instance, ask for consent at the end of each session to retain the learnings from that interaction, treating immediate processing and future reuse as distinct purposes requiring separate consent.

In conclusion, the six principles of DGPSI-AI provide a comprehensive governance model that appears to encompass the core tenets of major international AI frameworks, including those from the OECD, UNESCO, and the EU AI Act. As these principles are further developed and refined through feedback, they stand to offer a crucial roadmap for organizations navigating the complex intersection of AI innovation and data protection in India.


DGPSI-AI Principle-6: Ethics

The first principle we hear whenever we speak about AI Governance principles is “Ethical and Responsible AI”.

We have explored different dimensions of Responsible AI in the form of assuming risk responsibility, accountability and explainability. “Ethics” becomes relevant when there is no specific law to follow. For some more time, India will have to work without a specific law on AI and will have to manage with a jurisprudential outlook on some principles of ITA 2000 and DPDPA.

Hence an “ethical approach”, which goes beyond the written law and addresses what is good for society, is relevant for India. We alluded to “Neuro Manipulation Prevention” as an objective of the “Security of AI” in the previous article.

The “Ethical Principle” basically urges an organization to go beyond the written law and address other issues of society. The definition of “Fiduciary” under DPDPA requires an entity to assume a “duty” to manage personal data in such a manner that it secures the interest of the data principal. Hence “ethics” is already embedded in the DGPSI-Full/Lite principles. The extension to the AI process is therefore automatic.

The “ethical” requirements can only be identified through a risk assessment process, where some risks may appear to stretch the law a little far. When an auditor prepares a “Gap Assessment” based on the Model Implementation Specifications, and the auditee data fiduciary absorbs certain risks and creates an “Adapted Implementation Specification” for the auditor to probe, with evidence, through a “Deviation Justification Document”, the difference between what is “ethical” and what is “statutorily mandatory” is flagged.

Similarly, when an assessment is made of an AI, the first Gap Assessment may follow the principle of ethics with “utmost care”. Subsequently, a “Deviation Justification Document for AI Deployment” may be prepared to adapt the model implementation specifications of DGPSI-AI, with modifications guided by the “Risk Absorption” decisions of the management.

The “Post Market Monitoring” referred to in the EU AI Act is a principle that Data Fiduciaries following ethical considerations can adopt: they can monitor the impact of the AI on the data principals even after the purpose of processing is deemed complete, which may require a review of data retention. Hence, when an AI is allowed to store personal data after processing, the data fiduciary shall monitor the requirement of data retention at periodical intervals and purge the data when the need no longer exists.

We have earlier discussed the concept of “Data Fading” principles for a developer. This can be another “ethical requirement” adopted when the AI deployer continues training the model with internal data. Alternatively, at the end of each user session a question can be posed:

“Can the learnings of this session be retained for future use, or should they be deleted?”

This would treat the immediate processing as one purpose, and the “storing” and “re-use of the stored data” as different purposes for which a new consent is obtained. This can be done periodically or at the end of every process cycle.
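The session-end consent idea can be sketched as follows. The function and the `ask` callback are hypothetical names used only for illustration; the point is that immediate processing and retention of learnings are handled as two distinct consented purposes.

```python
def end_of_session_consent(ask) -> dict:
    """'Data Fading' sketch: retention of session learnings is a
    separate purpose needing its own consent.

    `ask` is any callable that poses a yes/no question to the user
    and returns a truthy value for "yes" (illustrative interface).
    """
    retain = ask("Can the learnings of this session be retained "
                 "for future use, or should they be deleted?")
    return {
        "immediate_processing": True,        # covered by the original consent
        "retain_learnings": bool(retain),    # distinct purpose, fresh consent
        "action": "store" if retain else "purge",
    }
```

If the user declines, the session learnings are purged; the original consent for the immediate processing remains valid either way.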

It appears that the six principles of DGPSI-AI could suffice to cover all the AI governance principles covered under the OECD principles, the UNESCO principles and the EU AI Act principles, as well as the Australian “Model Contractual Clauses” principles. We can continue to explore the Model Implementation Specifications under these principles to complete the development of the DGPSI-AI framework.

Naavi

(P.S: Readers may appreciate that the concepts of DGPSI-AI are under development and the need for refinements is recognized. Your comments will help us in the process.)


DGPSI-AI Principle 5: Security

So far we have discussed four principles of DGPSI-AI, a framework for compliance with DPDPA by an AI deployer. We will discuss the responsibilities of the developer subsequently.

They are

a) “Unknown Risk is Significant Risk”

b) Accountability

c) Explainability

d) Responsibility

The principles are basically discussed in the AI deployer’s scenario and are an extension of the DGPSI-Full framework.

To summarize what we have discussed:

The principle of “Unknown Risk is Significant Risk” suggests that an organization deploying AI should consider itself to be handling a significant risk, and the AI process should therefore be treated as a “Significant Data Fiduciary” process requiring a DPIA, a DPO and a Data Auditor to meet the compliance requirements. The principle of “Accountability” extends the first principle by requiring the designation of an “AI Handler” as the human responsible for the consequences of an AI. The “Explainability” principle further requires that a deployer document the functioning of the AI process with respect to how the output is achieved. Since the functioning of the AI is determined by the developer, who may hold back the code, fulfilling the “Explainability” obligation between the data fiduciary and the data principal needs the support of a proper contractual document between the Data Fiduciary and the supplier of the AI tool. The fourth principle, “Responsible Deployment of AI”, requires a justification document on the necessity and proportionality of the value addition that the Data Fiduciary intends to achieve by deploying the AI.

The next principle which we need to adopt as part of DGPSI-AI is “Security”. In this context, security means that the AI, as an algorithm, shall not cause harm to the data principal whose data is processed. The classification system adopted by the EU AI Act is based solely on the anticipated risk to the data principal. The risks that we need to recognize for the data principal are potential physical harm, potential mental manipulation in terms of what we normally recognize as “Dark Patterns”, and thirdly the deeper manipulation of the human brain which is the subject of Neuro Rights regulation.

Physical harm is predominant when AI is used in robots, both humanoid and industrial. Since humanoid robots are in most cases made of steel, the physical strength of the device is enough to cause significant physical damage if the robot misbehaves.

We can recall how a chess robot crushed the finger of an opponent player who made a wrong move. Similarly, there are instances of industrial robots dropping material on a worker and crushing him to death, of robots physically going rogue, and of the BINA 48 episode in which the robot expressed a desire to launch a nuclear attack and take over the world.

Thus AI has to be secured for physical security, digital security and neuro security. However, given that AI risk is “unknown”, the management of physical security arising out of the deployment of AI is also constrained by the unknown nature of the risk.

From the compliance point of view, the Data Fiduciary has to assume legal liability for the consequences and take appropriate assurances from the developer regarding successful testing at the developmental stage, so that it can claim “Reasonable Security”.

Identification and acknowledgement of physical risks, dark patterns and neuro-manipulation risk is considered part of the disclosure in a privacy notice involving AI usage under the DGPSI-AI principle. This is more like a “statutory warning”: necessary but not sufficient. Hence it is augmented by a “liability” admission clause supported by suitable liability insurance.

In other words, every AI algorithm shall be insured against causing any damage to the user, whether physical, mental or neurological. Watch out for a list of implementation specifications further expanding on the principles.

Naavi


DGPSI-AI principle-4: Responsibility

The first three principles of DGPSI-AI, namely “Unknown Risk”, “Accountability” and “Explainability”, have been discussed in the previous posts.

Now we shall take up the principle of “Responsible AI Usage” which reflects the OECD principle of “Inclusive Growth” as well as the UNESCO principle of “necessity and proportionality”.

“Responsible use” means that the Data Fiduciary shall ensure that the use of AI does not cause unnecessary harm to the data principals. Usage of AI should be more for the benefit of the data principals and not for profit making by the Data Fiduciary.

The DGPSI-Full framework suggests implementation specifications, namely the “Monetization Policy” (MIS 13) read with the “Data Valuation Policy” (MIS 9). According to these, processing of personal data has to recognize the value of the data before and after processing. The monetization policy has to explain what is being monetized and how.

The UNESCO principle of responsibility states: “The use of AI systems shall be governed by the principle of ‘necessity and proportionality’. AI systems, in particular, should not be used for social scoring or mass surveillance purposes.”

Here the principle takes into account the sensitivity of the processed data. In the monetization policy of DGPSI-Full, processes such as social scoring or surveillance have compliance implications, as well as a “change of value” of the data during processing, which should reflect the higher value of the sensitive personal data, assuming that its processing was duly permitted by the data principal.

Hence DGPSI-AI can absorb this UNESCO principle by stating:

“The value addition from AI processing shall be sufficient for the Data Fiduciary as against a non-AI process, and shall be supported by appropriate consent.”

Such a principle will also meet ethical obligations such as the requirement that the primary benefit of the AI use should flow to the data principals.

This “Value Addition Justification Principle” means that if the Data Fiduciary has means of achieving its data processing economically through a non-AI process, there may be no need to absorb the “Unknown Risk” of the AI.

Use of AI should not be just for the sake of fashion, and should be justified by an “AI use justification document”. This document should specify the purpose of the use of the AI and the value proposition which cannot otherwise be achieved at the same cost.

Such a document shall contain:

  1. Purpose of AI Use

    • Clear articulation of why AI is necessary for the specific data processing objective

    • Identification of the problem or opportunity that AI addresses

  2. Value Proposition Analysis

    • Quantified benefits that AI processing provides over traditional methods

    • Cost-benefit analysis comparing AI versus non-AI approaches

    • Demonstration that equivalent value cannot be achieved at the same cost through conventional processing

  3. Necessity Assessment

    • Evidence that the organization lacks viable non-AI alternatives for achieving the same processing objectives

    • Economic justification for absorbing the “Unknown Risk” inherent in AI systems

Organizations implementing this principle should:

  1. Conduct Value Addition Assessments before implementing AI systems

  2. Document Justifications for choosing AI over traditional processing methods

  3. Regularly Review AI implementations to ensure continued justification

  4. Monitor Impact on data principals to prevent unintended harm

  5. Maintain Transparency about monetization and value creation from personal data processing

The Responsible AI Usage principle thus ensures that AI deployment serves genuine business and social needs rather than merely following technological trends, while maintaining focus on data principal welfare as the primary justification for processing personal data through potentially risky autonomous systems.
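The checks above can be sketched programmatically. This is a hedged illustration only: the dictionary field names mirror the three sections of the justification document described above but are otherwise assumptions, not part of DGPSI-AI.

```python
def ai_use_justified(doc: dict) -> tuple:
    """Evaluate an 'AI use justification document' sketch.

    Returns (justified, problems). Field names are illustrative:
      purpose            -> why AI is needed (Purpose of AI Use)
      value_proposition  -> benefit over non-AI methods (Value Proposition)
      necessity          -> absence of viable alternatives (Necessity)
    """
    problems = []
    if not doc.get("purpose"):
        problems.append("no stated purpose for AI use")
    vp = doc.get("value_proposition", {})
    if not vp.get("benefit_over_non_ai"):
        problems.append("no demonstrated benefit over a non-AI process")
    necessity = doc.get("necessity", {})
    # Default to True (fail closed): the document must explicitly state
    # that no viable non-AI alternative exists.
    if necessity.get("viable_non_ai_alternative", True):
        problems.append("a viable non-AI alternative exists")
    return (len(problems) == 0, problems)
```

Such a gate could be run before deployment and again at periodic reviews, so that the justification for absorbing the “Unknown Risk” stays current rather than being a one-time exercise.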

The “Responsibility” principle at the developer’s end may have a slightly different perspective, since it has to incorporate fairness in the development and testing process. Hence there could be some difference in the application of this principle between the developer and the deployer.

Naavi
