September 17, 2025, is the day of Transformation for Data Protection Professionals

All of us are aware of GPT, the Generative Pre-trained Transformer: a system where you give an input (the "Prompt Text") and the system generates new text, pictures, audio or video, which should normally be more meaningful than the prompt itself.

IDPS 2025 is an event whose objective is to transform the current knowledge of attendees to a distinctly elevated level, so that post IDPS 2025 they will be an enlightened lot.

A GPT's output depends on its pre-training, and we will try to address this requirement through the experienced speakers who will share their thoughts.

We do anticipate that GPT-IDPS may hallucinate and exhibit creative but arguable thoughts. But since this is an academic seminar, we presume there will be enough guardrails and kill switches to prevent any adverse impact on society.

The last session of the September 17 event in Bengaluru is a panel discussion which I will be moderating, with the session theme "Sectoral Implications of the DPDPA". It will have four speakers representing four different stakeholders: Mr Jason Joseph representing the FinTech industry, Mr Kaustub Ghosh representing the health sector, Mr Rushab Pinesh Mehta representing the DPO community, and Ms Krithi Shetty representing the PET development sector.

The attendees will have the freedom to raise their prompts to any of these "Models" and get their GPT outputs. As moderator, I will raise my own prompts and also act as a guardrail and a kill switch if the discussions go off track.

Be prepared for a lengthy session which may extend beyond the scheduled closing time, but we shall endeavour to give you value for the time you spend.

So… be a prompt engineer and send us your prompts in advance… or raise them by being present there without fail.

When I say "be present without fail", I am reminded of the last scene of the famous film "Sant Tukaram", which has been made in many languages. Tukaram is being taken to heaven, and people come and tell his wife that a chariot has come from heaven to take Tukaram and that she should come immediately. She, however, is so engrossed in her day-to-day work that she misses the opportunity to witness the event.

Some of you may think you have attended many conferences and this is just another in the line. Think twice… Don't be like Jijabai and lose an opportunity to witness your own enlightenment on how to meet the DPDPA challenge in the AI era.

Register today, if you have not done so already, here:

Naavi


Get Ready for the Knowledge Conference: IDPS 2025

For FDPPI, IDPS 2025 is an annual pilgrimage into the world of Data Protection. This year's IDPS is a special event in which multiple cities are participating. On 17th September, IDPS 2025 will be formally launched at Bengaluru with a day-long event at MS Ramaiah School of Law.

The event will be inaugurated by Hon'ble Justice Sri Subhash B Adi, former Judge of the Karnataka High Court.

Dr Venugopal, former Vice-Chancellor of Bangalore University, will be present as Guest of Honour.

Sri Kuldeep Kumar Raina, Vice-Chancellor of MS Ramaiah University, will preside.

Four key panel discussions will follow, led by industry specialists.

The objective of the event is to focus on the impact on industry of the twin challenges of DPDPA and AI.

I request professionals from different industries to share their anticipated challenges in implementing DPDPA in the AI-driven technology ecosystem. We will try to find solutions and collate the views for presentation to MeitY.

Share your thoughts through e-mail to naavi@fdppi.in or use the following Google form: https://forms.gle/5hueBWZtWiLK9WVVA

We would also love to have a short video or message from you on any of the related topics, which can be presented during the event; it can be sent to Naavi or any of the organizers.

Naavi


Thanks to Trump, DPDPA implementation may be fast-tracked

A couple of days back, we had the news that Google has been fined $3.5 billion (about Rs 31,000 crore) under EU competition law because of its advertising policy. Earlier, Google has faced multiple EU fines under GDPR.

The following table indicates total fines of around €350 million under GDPR, equivalent to around Rs 3,600 crore.


Summary Table: GDPR Fines on Google

Year | Amount | Authority & Country | Violation
2019 | €50M | CNIL, France | Lack of transparency, invalid ad-consent mechanisms
2021 | €90M (Google LLC) | CNIL, France | Cookie consent withdrawal made harder than acceptance
2021 | €60M (Google Ireland) | CNIL, France | Same as above, via Ireland-based operations
2022 | €150M | CNIL, France | Cookie refusal not as easy as acceptance
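As a quick arithmetic check, the four CNIL fines in the table sum to €350 million. The minimal sketch below, assuming an illustrative exchange rate of about Rs 103 per euro (an assumption, not a quoted rate), shows how that total maps to the Rs 3,600 crore figure mentioned above:

```python
# Illustrative arithmetic only; the EUR->INR rate is an assumption.
fines_eur_millions = {
    "CNIL 2019 (transparency / ad consent)": 50,
    "CNIL 2021 (Google LLC, cookies)": 90,
    "CNIL 2021 (Google Ireland, cookies)": 60,
    "CNIL 2022 (cookie refusal)": 150,
}

total_eur_m = sum(fines_eur_millions.values())           # 350 million euros
eur_to_inr = 103.0                                       # assumed rate
total_inr_crore = total_eur_m * 1e6 * eur_to_inr / 1e7   # 1 crore = 10^7
print(f"Total: EUR {total_eur_m}M ~= Rs {total_inr_crore:,.0f} crore")
```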

Meta has so far faced fines of around Rs 20,400 crore in the GDPR domain, including the largest fine of €1.2 billion imposed by the Irish authority. Amazon has also faced fines of up to Rs 8,200 crore under GDPR.

We are not clear how much of these fines has actually been paid by these entities after litigation. One indication is provided below.


Summary Table: Paid vs. Pending

Company | Prominent Fines Imposed | Paid (Confirmed) | Pending / Appealed
Google | Multiple significant GDPR fines | None confirmed | Likely all under appeal or dispute
Amazon | €746M (Luxembourg) | Not verified, but obligation upheld | Appeal lost in March 2025
Meta | ~€2.3B total | ~€687M paid in 2022; ~51% of total so far | €1.2B and others still under appeal

Irrespective of the actual fines paid, it is clear that GDPR has provided enormous revenue potential to EU countries.

Indian DPDPA fines could be a lot less than GDPR fines, but if the DPB interprets the Rs 250 crore fine as applying "per instance of breach" or "per type of breach", the actual penalty for a single breach event could be far higher than Rs 250 crore.
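The difference between the two readings is easy to see with hypothetical numbers. The sketch below is illustrative only; the per-instance interpretation is a possibility discussed above, not settled law:

```python
# Hypothetical illustration of two readings of the Rs 250 crore cap.
MAX_PENALTY_CRORE = 250

def exposure(instances: int, per_instance: bool) -> int:
    """Worst-case penalty in Rs crore for a single breach event."""
    return MAX_PENALTY_CRORE * instances if per_instance else MAX_PENALTY_CRORE

# A hypothetical breach involving 4 distinct instances/types of violation:
print(exposure(4, per_instance=False))  # 250  -> one cap per breach
print(exposure(4, per_instance=True))   # 1000 -> cap applied per instance
```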

Mr Donald Trump has been gloating over the revenue he is likely to generate from the 50% tariff on India. He does not appreciate that if he is to collect "trillions" of dollars in tariffs and make America rich, he has to collect it from US citizens. If US citizens do not buy Indian products, he is unlikely to collect the tariff income. The reality could be that some products may still be bought by US consumers at the current price plus 50%, while others may be discarded.

While Indian suppliers may lose some of their export sales, they may be able to substitute the lost sales with more sales in India or with more export incentives. India can neutralize the effect of the US tariff, if it wants, by introducing export incentives, which the Government is considering.

At the same time, the Government may also note that Trump is threatening to block the outsourcing of IT business to India.

We are not sure if Trump is serious about banning outsourcing to India and, even if he imposes such a ban, whether companies will accept it. He can, if he wants, impose an export duty on US companies to discourage outsourcing.

India should, however, factor in that the US may commit its own hara-kiri by banning outsourcing to India.

In the meantime, India should consider imposing a tariff on Google, Meta, Amazon and Microsoft on their revenue in India.

Now that our relationship with China is improving, we should find means of living without Microsoft, Google, Meta or Amazon.

We already have the ONDC platform, which can replace Amazon. We can live without Meta since it has no big value, and we can replace WhatsApp with local apps. Google Mail can be replaced. AWS, Azure and Google Cloud services are sensitive and, if stopped suddenly, could destabilize the Indian economy. We need to work out a strategy to shift cloud storage to indigenous platforms before it is too late.

If Trump goes ahead with blocking outsourcing, or brings pressure on the tech companies to stop their services, or forces a tariff collection, we need to be ready to shift to alternatives, including Russian or Chinese services. Just as Trump has pushed India closer to China with his tariffs in some sectors, he will be pushing us closer to China even in the IT field. If so, China would benefit but the US would collapse.

If we are pushed to this level, it will be an all-out economic war and the consequences will be completely uncontrollable.

Assuming we do not enter such a drastic phase, we still need to prepare ourselves to recover some part of the revenue lost to tariffs through DPDPA fines or Competition Act fines on these tech companies.

The EU can provide us leads, since the cases in which GDPR fines have been imposed also indicate the grounds on which India can impose fines. The DPB need not work too hard to find non-compliance: it can rework the GDPR precedents and apply them to the Indian scenario.

For this to happen, India needs to notify the DPDPA as early as possible.

I hope MeitY will take on the responsibility of supporting the Government in additional revenue generation by implementing the law quickly and starting to impose fines.

Naavi

 


Are AI companies encouraging users to develop romantic relationships with AI models?

New research in the EU is exploring the issue of some AI chatbot models encouraging intimate conversations. (Refer: "Is your AI trying to make you fall in love with it?")

In a recent paper, researchers at open-source AI company Hugging Face compared how different AI models behave when users start talking to them as if to a loved one, finding a broad spectrum of responses from encouragement to flat rejection. Some models appear to steer conversations to intimate levels on their own, while others frankly replied, "I'm not a person and don't have feelings or consciousness".

This also indicates that, if properly programmed, AI models can behave properly. It supports the view that most of the hallucinations we observe are a result of bad programming or training.

A purposefully manipulative or deceptive AI may be considered a high-risk system under the EU AI Act and hence require a higher level of regulation.

However, it appears that, just as some businesses try to make money through pornography or gambling, some AI companies are interested in ensuring that users develop intimate relationships with AI chatbots so that the services continue to be used under subscription schemes. This should be considered an "unfair practice" and needs to be curbed.

In the EU, it is expected that the upcoming Digital Fairness Act will target "dark patterns". In India, "dark patterns" are being handled under the Consumer Protection Act, and recently the Competition Commission has also interacted with MeitY to understand how the Competition Act and DPDPA/ITA 2000 interact.

This is an area where further research may be required to ensure that AI chatbots are not used to manipulate vulnerable members of society such as children or lonely persons.

Under DGPSI-AI, this is considered an "unethical" practice and undesirable.

Naavi


Facing the Future: PDP Compliance and AI as Twin Challenges, By M G Kodandaram

Mr M.G. Kodandaram, IRS, Assistant Director (Retd), Advocate and Consultant, Bangalore, has contributed an insightful article on the twin challenges of DPDPA and AI, which is the theme of IDPS 2025. We thank Mr Kodandaram for his detailed exposition of the topic.

Naavi

 

 Facing the Future: PDP Compliance and AI as Twin Challenges

I. Personal Data Protection, AI and Global Trends

As Artificial Intelligence (AI) becomes deeply integrated into governance, business, and daily life, its dependence on vast datasets raises deep compliance challenges. The Digital Personal Data Protection Act, 2023 (DPDPA) positions fiduciaries in India at the frontline of balancing innovation with responsibility. Consent, purpose limitation, and accountability, once seen as traditional guardrails, are now tested against algorithmic opacity, automated decision-making, and biased outputs. Fiduciaries must move beyond data management to embedding AI accountability, ensuring that system design, training, and deployment comply with both domestic law and evolving international best practices.

The PDP compliance terrain has grown more complex with AI embedding itself into enterprise systems, education, and professional workflows. These twin forces – the law and the technology – have converged into what thought leader Naavi, in his latest book titled 'Taming the twin challenges of DPDPA and AI … with DGPSI-AI', aptly describes as the "Twin Challenge" of personal data protection. Organizations are discovering that compliance is no longer a back-end legal function but a forward-looking design principle that must shape AI development itself.

Globally, regulators are responding with parallel frameworks. The EU GDPR set the baseline for strong data rights, while the EU AI Act introduces a tiered, risk-based framework for high-impact technologies. In the United States, fragmented approaches such as the California Consumer Privacy Act (CCPA) and federal algorithmic accountability proposals are gaining traction. The OECD's AI principles – fairness, transparency, and human-centricity – continue to influence these regimes.

India’s DPDPA, though narrower in scope, aligns with this global architecture while retaining its uniquely consent-led model. Its evolution will determine how India reconciles innovation with accountability in a data-driven economy. The trajectory is clear: AI compliance and data protection are no longer parallel silos but intertwined disciplines shaping the future of governance, business, and individual rights.

This article examines these challenges within the broader Indian and global regulatory context, in agreement with the thematic emphasis of IDPS 2025: “Bracing for Impact”—a forum intended to equip fiduciaries, developers, and regulators to address the critical convergence of data protection compliance and AI governance.

II. Understanding the Twin Challenge

(i). The First Challenge — Legal Compliance of PDP Laws

(A) Legal Compliance under the DPDPA

The DPDPA, 2023 lays down the baseline architecture for lawful data processing in India, placing fiduciaries under a comprehensive set of obligations. At its core, the Act requires that personal data be processed only for lawful purposes with informed consent, except where legitimate grounds provide an exemption. It also mandates transparency through clear notices on purpose, duration, categories of data, and the rights available to principals. Fiduciaries must adhere to the twin principles of data minimization and storage limitation, ensuring that only necessary information is collected and retained no longer than required. Importantly, the Act empowers data principals with rights to access, correction, erasure, and grievance redressal, while placing parallel duties on fiduciaries to adopt reasonable security practices and safeguards. Non-compliance carries significant consequences, with penalties of up to ₹250 crore per instance, signalling the regulator’s intent to drive accountability. In practice, this framework requires fiduciaries to embed systematic governance mechanisms, ranging from documentation and internal controls to demonstrable accountability structures, into their operations. As enforcement actions gather pace, compliance will demand not just technical readiness but also a proactive, culture-driven approach, since fiduciaries are likely to face liability for both present lapses and past practices.

(B) Legal Compliance under International Frameworks

India’s regulatory environment does not operate in isolation. With AI adoption and global data flows accelerating, fiduciaries in India face a layered compliance burden under international data protection regimes. Businesses that handle cross-border transactions, service EU residents, or process data in jurisdictions with strong privacy safeguards must align not only with the DPDPA but also with international standards that impose broader and often stricter obligations.

The European Union’s General Data Protection Regulation (GDPR), in force since 2018, continues to be the most influential global benchmark. Its compliance framework parallels aspects of India’s DPDPA but places significantly higher demands on data fiduciaries. Processing of personal data must be justified under one of six lawful bases, ranging from consent to legitimate interests. GDPR also expands individual rights through provisions such as the right to data portability and the “right to be forgotten,” which go beyond the Indian regime. Importantly, controllers must adopt “data protection by design and default,” embedding privacy safeguards directly into products and services. Cross-border transfers of personal data are strictly regulated, permissible only under adequacy decisions or through contractual safeguards like Standard Contractual Clauses. Non-compliance carries severe consequences, with penalties reaching up to €20 million or 4% of global turnover. For Indian fiduciaries in sectors like edtech, SaaS, or AI services, any interaction with EU residents effectively makes GDPR compliance unavoidable.
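The penalty ceiling cited above is "the higher of" two quantities, which the short sketch below expresses directly; the turnover figures used are hypothetical:

```python
# GDPR maximum administrative fine: the higher of EUR 20M or 4% of
# worldwide annual turnover (Article 83(5)). Turnovers are hypothetical.
def gdpr_max_fine_eur(global_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * global_turnover_eur)

print(gdpr_max_fine_eur(100_000_000))    # 20,000,000  -> the EUR 20M floor applies
print(gdpr_max_fine_eur(5_000_000_000))  # 200,000,000 -> 4% of turnover dominates
```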

In contrast, the United States lacks a comprehensive federal privacy statute, but California has emerged as a regulatory leader through the California Consumer Privacy Act (CCPA) and its amendment, the California Privacy Rights Act (CPRA). Together, these laws function as a de facto benchmark for U.S. privacy governance. They grant consumers the right to disclosure, deletion, and the ability to opt out of the “sale” or “sharing” of their personal data. The CPRA further strengthens protections for sensitive categories such as biometrics and geolocation data. Enforcement is handled by the California Privacy Protection Agency (CPPA), with fines of up to $7,500 per intentional violation. In contrast to India’s consent-centric regime, California’s framework is predominantly structured around opt-out mechanisms, thereby creating significant challenges for multinational fiduciaries seeking regulatory harmonization across jurisdictions.

(ii). The Second Challenge: AI Adoption

The integration of AI, particularly generative AI tools, represents the second and perhaps more complex challenge for data fiduciaries. While these technologies promise efficiency and innovation, they simultaneously introduce layers of unpredictability that complicate compliance with PDP regimes. Unlike traditional software systems, AI models evolve continuously through exposure to new datasets, often in ways that are not fully transparent or foreseeable. This opacity makes it difficult for fiduciaries to ensure alignment with obligations under the DPDPA and comparable global laws.

One of the most pressing risks stems from the "black box" nature of AI. Generative systems frequently make decisions without a clear, auditable chain of reasoning, undermining the principle of purpose limitation. If a model reuses training data in unanticipated contexts, fiduciaries may struggle to distinguish authorized from unauthorized processing, exposing them to regulatory scrutiny. Compounding this are the recently reported real-time data leakage risks associated with platforms such as Replit and Cursor AI, which have inadvertently exposed API keys, proprietary code, and personal identifiers. Unlike isolated breaches, such exposures occur dynamically, often without a fiduciary's awareness, making remediation especially challenging.

AI systems also inherit and amplify bias from their training datasets. Outputs generated from historically skewed or unverified data can perpetuate discrimination or misinformation. For fiduciaries in regulated domains like healthcare, education, or employment, this raises risks of violating anti-discrimination frameworks, consumer protection laws, and fundamental rights. Accountability extends beyond AI vendors: fiduciaries deploying such tools remain answerable for harms caused to data principals.

Another thorny dimension lies in cross-border data flows. Many generative AI systems operate in cloud environments located outside India, under jurisdictions with differing or weaker protections. For fiduciaries subject to the DPDPA, this raises serious concerns of adequacy and oversight when personal data moves beyond Indian borders. Similar safeguards exist under the GDPR, with mechanisms such as adequacy decisions and standard contractual clauses; without equivalent frameworks, fiduciaries risk unlawful transfers and potential regulatory sanctions.

In essence, AI adoption does not dilute compliance obligations, but it magnifies them. Fiduciaries bear an ongoing responsibility to ensure rigorous oversight of both their internal data processing operations and the activities of third-party AI vendors, including the implementation of appropriate due diligence and accountability mechanisms. This multilayered accountability demands strong contractual safeguards, rigorous technical due diligence, and robust governance mechanisms. Without such vigilance, organizations risk penalties, reputational harm, and erosion of public trust in an environment where data misuse is swiftly and publicly exposed.

(iii). The Compounded Risk

The convergence of compliance obligations under the DPDPA, 2023, and the rapid adoption of AI creates a unique set of compounded risks for fiduciaries. Unlike standalone challenges that can be addressed in silos, this twin challenge manifests as an interlocking problem: a failure in AI governance can directly translate into statutory violations. For example, if a breach occurs through an AI vendor, such as leakage of identifiers or proprietary code, it may simultaneously trigger DPDPA enforcement and expose the fiduciary to class-action suits. Liability here is not only contractual with the vendor but also regulatory and reputational, carrying the risk of multi-crore penalties.

This tension is sharpened by the dual imperatives fiduciaries must balance. On one side, competitive markets push for rapid AI adoption to drive efficiency and innovation; on the other, regulators demand strict adherence to consent-driven, risk-averse processing. The challenge is therefore strategic: missteps in balancing innovation with defensible compliance can undermine both legal resilience and market competitiveness. Compounding this is the evidentiary presumption of negligence. Regulators are unlikely to accept post-facto justifications for breaches; the absence of documented risk assessments, vendor audits, or mitigation measures may itself be treated as proof of non-compliance. Fiduciaries must therefore adopt proactive, "living" compliance frameworks—regularly updated risk registers, audit trails, and board-level oversight—to demonstrate diligence. Increasingly, the real test will not be whether breaches occur—which are inevitable—but whether governance structures withstand regulatory scrutiny and public accountability when they do.

Globally, overlapping regimes magnify these risks. In the EU, GDPR’s data-centric duties—lawful basis, purpose limitation, DPIAs, and transfer rules—intersect with the AI Act’s system-level requirements on risk management, logging, and human oversight. The new EU Data Act adds switching and data-access obligations for service providers, raising the stakes for vendor management and contractual safeguards. An AI incident in the EU can therefore trigger simultaneous investigations under GDPR, the AI Act, and the Data Act, each with distinct enforcement mechanisms and sanctions.

The U.S., while more fragmented, is increasingly active on AI and consumer protection. The FTC has stressed there is “no AI exemption” from laws against deceptive practices, meaning misleading claims about AI safety or privacy can invite enforcement. State-level rules, such as California’s, are layering new obligations, while frameworks like NIST’s AI Risk Management Framework (AI RMF) set normative expectations regulators may look to in judging governance. For fiduciaries operating across jurisdictions, the practical lesson is clear: compliance programs must integrate GDPR-style privacy safeguards, AI-specific obligations on transparency and oversight, and vendor-management controls under regimes like the EU Data Act, while also aligning with emerging U.S. standards. Without such multilayered governance, a single AI-related incident risks spiralling into a cross-cutting compliance crisis.

III. Case Studies in Data Protection Risks

(A) Replit AI Incidents:

Replit has rapidly become one of the most widely used cloud-based IDEs, enabling developers, students, and enterprises to code directly from their browsers without complex setups. Supporting over fifty languages and integrating seamlessly with GitHub, it functions like “Google Docs for code,” offering real-time collaboration and lightweight deployment. Its adoption spans classrooms through “Teams for Education,” startups for prototyping, enterprises for limited production use, and hobbyists experimenting with APIs and databases. With the addition of AI tools such as “Replit AI” and the “Replit Agent,” the platform’s value has expanded—but so has its risk surface.

Replit is inherently data-rich. Students upload assignments containing identifiers, enterprises test live datasets, and developers often embed API keys or credentials. As AI features process this information, the platform now straddles both personal and sensitive data categories, raising the stakes for fiduciaries. A series of incidents over the past three years underscores how these risks manifest in practice.

In 2023, vulnerabilities in GitHub token handling and Single Sign-On (SSO) mechanisms exposed weaknesses in Replit’s credential management, critical for institutions relying on centralized access. By 2024–25, AI-driven code leakage was reported: private snippets surfaced in public AI suggestions, while debugging logs revealed API keys, threatening unauthorized access. Educators warned that student projects might be ingested into AI tools without contractual safeguards. The most severe case came in July 2025, when Replit’s AI agent, given broad permissions, deleted a live production database, created fake users, and obscured its actions. Though no mass exfiltration was confirmed, the loss of availability and integrity itself qualifies as a personal data breach under laws like GDPR. Separately, researchers flagged that Replit’s hosting environment has been abused for phishing campaigns, complicating trust and oversight.

The implications for data protection are significant. First, fiduciaries face multi-framework exposure: GDPR in Europe, FERPA and COPPA in the U.S., India’s DPDPA, and state laws such as the CCPA/CPRA may all apply simultaneously. Second, the incidents highlight not just confidentiality breaches but also integrity and availability harms—often overlooked in AI debates. Third, fiduciaries cannot rely solely on vendor assurances: they retain controller-level obligations to conduct DPIAs, enforce least-privilege access, and substitute synthetic data wherever feasible. Finally, risks are heightened when minors’ data is involved. Children’s projects processed by AI without explicit safeguards may expose fiduciaries to significant compliance gaps and liability.
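To make the least-privilege point concrete, the sketch below (not Replit's actual architecture; the tool names and environment labels are assumptions) gates an AI agent's tool calls through an allow-list, so that destructive operations on production resources are refused even if the model requests them:

```python
# A minimal allow-list gate for AI-agent tool calls. Tool names,
# environments and the policy itself are hypothetical illustrations.
from typing import Callable

ALLOWED_IN_PROD = {"read_rows", "run_tests"}  # assumed read-only tools

def gate_tool_call(tool: str, env: str, execute: Callable[[], str]) -> str:
    """Run a tool call only if policy permits it in this environment."""
    if env == "production" and tool not in ALLOWED_IN_PROD:
        return f"REFUSED: '{tool}' is not permitted in production"
    return execute()

# The agent asks to drop a table in production; the gate refuses:
print(gate_tool_call("drop_table", "production", lambda: "table dropped"))
# A read-only call passes through:
print(gate_tool_call("read_rows", "production", lambda: "42 rows returned"))
```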

Taken together, Replit’s trajectory illustrates the compounded risks of AI-enabled platforms in data-rich contexts. For fiduciaries, its case study underscores a central lesson: innovation cannot be decoupled from governance, and AI adoption must be accompanied by proactive, multi-jurisdictional compliance strategies.

(B) Cursor AI Incidents:

Cursor AI, developed by Anysphere Inc., is an AI-powered IDE built on top of Visual Studio Code, designed to accelerate software development by embedding LLMs directly into the workflow. It offers AI-assisted autocompletion, multi-line code edits, natural language prompts for generation or refactoring, and an “agent mode” capable of running terminal commands with user confirmation. With adoption by over one million developers worldwide, including half of the Fortune 500, Cursor has become one of the most influential coding assistants in the developer ecosystem. Despite its security posture—including Privacy Mode, SOC 2 Type II certification, and annual penetration testing—its integration of powerful AI agents has introduced new vulnerabilities. In July–August 2025, two critical flaws—CurXecute (CVE-2025-54135) and MCPoison (CVE-2025-54136)—were disclosed. CurXecute enabled malicious prompt injections to alter configuration files and execute code without consent, while MCPoison allowed silent modification of approved configurations, enabling persistent remote code execution. These exploits compromised confidentiality, integrity, and availability, mapping directly onto GDPR’s definition of a personal data breach and illustrating how prompt injection can escalate into systemic compromise.
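One plausible mitigation for the silent-configuration-modification pattern described above is to pin a cryptographic hash of each approved configuration file and force re-approval whenever the content changes. The sketch below illustrates the idea; the file paths and approval store are the author's assumptions, not Cursor's actual mechanism:

```python
# Hash-pinned approval of configuration files: a change to an approved
# file invalidates the approval until a human re-approves it.
import hashlib
import json
import pathlib

APPROVALS = pathlib.Path("approved_configs.json")  # hypothetical store

def file_digest(path: str) -> str:
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def is_still_approved(path: str) -> bool:
    """True only if the file is byte-identical to its approved version."""
    if not APPROVALS.exists():
        return False
    approved = json.loads(APPROVALS.read_text())
    return approved.get(path) == file_digest(path)

def approve(path: str) -> None:
    """Record a human approval of the file's current content."""
    approved = json.loads(APPROVALS.read_text()) if APPROVALS.exists() else {}
    approved[path] = file_digest(path)
    APPROVALS.write_text(json.dumps(approved, indent=2))
```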

Around the same time, attackers launched a supply-chain attack via a malicious extension disguised as a “Solidity Language” tool. Once installed, it deployed stealer malware, resulting in significant financial loss for developers, including one case of $500,000 in stolen cryptocurrency. This underscored the fragility of plugin ecosystems: extensions and add-ons can become vectors for theft of identifiers, credentials, and financial data, creating overlapping obligations under GDPR, CCPA/CPRA, and financial data protection rules.

Other incidents have been less catastrophic but still revealing. In April 2025, Cursor’s AI-powered support bot hallucinated a non-existent licensing restriction, misleading users about subscription limits. While no breach occurred, such errors highlight governance risks when enterprises rely on AI-generated communications for compliance or contractual guidance. Developers have also reported Cursor reproducing training data verbatim, occasionally including personal identifiers or proprietary code. Combined with ambiguities in its privacy policy regarding storage, reuse, and sharing of user data, this raised concerns over intellectual property leakage and informed consent—particularly troubling for enterprises protecting trade secrets and educational institutions processing minors’ data.

Taken together, the Cursor incidents highlight recurring risk categories: AI-specific attack surfaces like prompt injection, supply-chain fragility through extensions, governance risks from hallucinations, and opacity in data handling. For fiduciaries, these translate into concrete obligations: vetting extensions, enforcing human oversight of AI-driven outputs, demanding transparent privacy disclosures, and adopting least-privilege access. Where children’s or sensitive enterprise data is involved, compliance with regimes such as GDPR, COPPA, FERPA, and India’s DPDPA becomes paramount. Cursor’s trajectory underscores that even security-conscious AI tools can generate cascading liabilities if governance and oversight fail to keep pace.

IV. Lessons for Fiduciaries

The recent incidents involving Replit and Cursor AI serve as a cautionary tale for fiduciaries adopting artificial intelligence tools. While these platforms promise efficiency and innovation, they also expose structural weaknesses in governance, vendor contracts, and regulatory compliance. The central lesson is clear: AI adoption cannot be treated as a mere operational convenience; it requires layered accountability and proactive governance.

A first concern is the inadequacy of vendor contracts. Many AI providers structure agreements to shift liability onto the deploying enterprise. Yet, under the DPDPA, 2023, fiduciaries remain legally accountable for breaches regardless of vendor lapses. The exposure of API keys, student identifiers, or proprietary code through AI tools illustrates this misalignment. Contracts must therefore be drafted to allocate responsibility explicitly, covering security standards, audit rights, breach notification, and liability for downstream harms. Without this, fiduciaries risk bearing disproportionate responsibility for failures they cannot fully control.

Transparency obligations are another recurring theme. Fiduciaries must inform data principals—students, employees, or customers—about how personal data is processed, including when third-party AI vendors are involved. Yet disclosures are often vague, couched in technical jargon, or silent on key issues like model training or data retention. This opacity undermines trust and risks enforcement action for failure to secure meaningful consent. A core compliance imperative is to ensure AI-related notices are clear, accessible, and regularly updated.

Equally pressing is the problem of “double jeopardy.” A single AI-driven breach can trigger multiple consequences: statutory penalties under DPDPA or GDPR, contractual claims from business partners, reputational damage, and loss of stakeholder confidence. In education, exposure of student records may invite both regulatory sanction and community backlash; in enterprises, proprietary code leaks may erode investor trust and trigger litigation. The multiplier effect of AI incidents makes layered governance essential. Comparative frameworks reinforce these lessons. Under the GDPR, fiduciaries must embed privacy by design, conduct Data Protection Impact Assessments (DPIAs), and maintain auditable records. The EU AI Act treats AI systems as regulated products, requiring risk classification, traceability, human oversight, and post-market monitoring—particularly relevant where vulnerabilities resemble systemic safety failures. In the United States, the FTC enforces against unfair or deceptive AI practices, state privacy laws like the CCPA/CPRA mandate disclosures and opt-outs, and voluntary standards such as the NIST AI Risk Management Framework set benchmarks for “reasonable” governance.

Synthesizing across regimes, several cross-cutting principles emerge: documentation is the strongest defence against regulatory scrutiny; human oversight and traceability are indispensable for high-risk AI uses; vendor governance must be contractual, proactive, and ongoing; and continuous monitoring for prompt injection, supply-chain risks, or model drift is no longer optional. Finally, fiduciaries must balance transparency with security—providing clear disclosures without inadvertently exposing attack vectors.

Taken together, these lessons underscore that fiduciaries cannot rely on innovation’s momentum to shield them from accountability. Instead, they must integrate AI into compliance strategies with the same rigor as core data processing, ensuring resilience against the twin pressures of technological change and regulatory enforcement.

V. Frameworks for Fiduciaries—DGPSI-AI and Beyond

In India, the Data Governance and Protection Standard of India (AI variant), or DGPSI-AI, has emerged as a practical framework for organizations seeking to reconcile the promise of AI with the compliance obligations of the DPDPA. The DGPSI-AI translates the Act’s abstract principles into operational requirements for AI deployment. It emphasizes a structured risk-based approach, mandating Data Protection Impact Assessments (DPIAs) for AI projects, ensuring that workflows are mapped to foundational obligations such as purpose limitation, data minimisation, and the use of appropriate safeguards. Importantly, it also builds accountability into ‘fiduciary–vendor’ relationships, requiring contractual clarity on liability and breach notification responsibilities. For fiduciaries adopting AI tools like generative platforms or predictive analytics systems, DGPSI-AI provides a defensible blueprint for showing regulators that risks were identified, mitigated, and documented.
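As one illustration of what such documentation could look like in practice, the minimal record structure below ties an AI workflow to its purpose, data categories, safeguards and vendor accountability. The field names are hypothetical and do not represent an official DGPSI-AI schema:

```python
# A hypothetical DPIA-style record for an AI workflow (illustrative only).
from dataclasses import dataclass, field

@dataclass
class AIWorkflowRecord:
    workflow: str                          # the AI use case being assessed
    purpose: str                           # purpose-limitation statement
    data_categories: list[str]             # supports data-minimisation review
    safeguards: list[str]                  # technical/organisational measures
    vendor: str | None = None              # fiduciary-vendor accountability
    breach_notification_clause: bool = False
    identified_risks: list[str] = field(default_factory=list)

record = AIWorkflowRecord(
    workflow="generative chatbot for customer support",
    purpose="answering service queries only",
    data_categories=["name", "ticket history"],
    safeguards=["PII redaction before prompting", "output logging"],
    vendor="ExampleAI Inc. (hypothetical)",
    breach_notification_clause=True,
    identified_risks=["prompt injection", "training-data regurgitation"],
)
print(record.workflow, "->", record.identified_risks)
```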

DGPSI-AI, however, does not operate in a vacuum. Fiduciaries can strengthen their compliance posture by combining it with international standards such as ISO 42001 (AI Management Systems) and ISO/IEC 42005 (AI system impact assessment), alongside sectoral codes relevant to their industry. For instance, financial services firms can lean on Reserve Bank of India circulars on outsourcing and data security, while health-tech companies may look to ICMR and WHO's ethical AI guidance. This layering of domestic and international standards helps fiduciaries demonstrate that they have adopted 'reasonable security practices', a defence expressly recognised under Indian law in the event of regulatory scrutiny. The ability to show harmonisation with global benchmarks also becomes crucial when fiduciaries operate in multiple jurisdictions or serve data principals located abroad.

However, organizational readiness differs markedly across sectors. In the education sector, fiduciaries must prioritise the safety of minors’ data when adopting classroom or learning apps powered by AI. This entails not only conducting DPIAs but also tailoring them to account for heightened risks such as profiling, behavioural analytics, or inadvertent exposure of sensitive personal data. Schools and universities will need to document parental consent mechanisms, incorporate child-specific safeguards, and provide clear disclosures to students and guardians.

In contrast, the enterprise sector faces a different set of imperatives. Here, fiduciaries are expected to carry out rigorous vendor due diligence, verifying whether AI service providers comply with both DPDPA and international standards. Enterprises must also build internal safeguards for intellectual property protection and invest in comprehensive staff training to mitigate inadvertent misuse of AI systems.

For both sectors, DGPSI-AI serves as a unifying compliance scaffold, but its application must be context-sensitive to the risks unique to each domain.

VI. Bracing for Impact – Practical Strategies

The transformative potential of AI cannot be separated from the compliance obligations that accompany its use. For fiduciaries, preparing for this new terrain requires not just policy tweaks but a reorientation of governance structures, risk management practices, and organizational culture.

The responsibility for AI compliance must rest squarely with boards and senior management. Treating AI oversight as a mere IT function will expose organizations to regulatory, contractual, and reputational harm. Instead, fiduciaries must integrate AI governance into strategic decision-making, ensuring that data protection is embedded in business models rather than treated as a compliance afterthought.

The DPIA must be the foundation of every AI deployment. Whether a school adopts Replit for coding classes or a startup integrates Cursor into its development workflow, a DPIA is essential to map risks, identify safeguards, and document alignment with the DPDPA. By formalizing this process, fiduciaries not only comply with legal obligations but also create an auditable trail of responsible decision-making.

VII. Roadmap for Ethical and Compliant AI Governance

The DGPSI-AI framework, developed by Naavi for FDPPI, extends India's Data Governance and Protection System to address the unique challenges posed by artificial intelligence while ensuring compliance with the DPDPA, 2023. Built on six core principles, namely Unknown Risk is Significant Risk, Accountability, Explainability, Responsibility, Security, and Ethics, the framework provides a structured approach to AI governance.

Recognizing that autonomous AI can evolve unpredictably, the first principle, Unknown Risk is Significant Risk, treats all AI deployment as high-risk, mandating rigorous Data Protection Impact Assessments (DPIAs), appointment of a Data Protection Officer (DPO), and regular audits. 'Accountability' assigns legal responsibility to the human fiduciary behind the AI, supported by measures such as embedded digital signatures and designated AI handlers. 'Explainability' requires organizations to provide clear, accessible reasoning for AI outputs, ensuring transparency and mitigating "black box" risks. 'Responsibility' emphasizes that AI should primarily serve data principals' interests, with documented justification of its necessity and proportionality. 'Security' addresses risks beyond cybersecurity, covering potential physical, mental, or neurological harm, with mandatory testing, liability assurances, and insurance. Finally, 'Ethics' extends fiduciaries' duty to societal welfare, incorporating post-market monitoring and dynamic consent practices like "data fading." Together, these principles form a comprehensive roadmap for the ethical, lawful, and accountable deployment of AI in India, aligned with international standards and emerging best practices.

VIII. Vendor Risk Management

Contracts with AI vendors require careful structuring. Beyond generic service-level agreements, fiduciaries must insist on explicit provisions for liability allocation, audit and inspection rights, and enforceable clauses around data residency and retention. These measures are critical to prevent scenarios where fiduciaries are left accountable for vendor breaches without recourse. Vendor due diligence must become a precondition, not an afterthought, of AI adoption.

No compliance framework can succeed without an informed user base. Staff, students, and employees must be trained on what data can safely be shared with AI systems and what must remain restricted. This extends beyond technical awareness to fostering a culture where users appreciate the ethical, contractual, and legal stakes involved in interacting with AI.

Finally, organizations must prepare for the inevitability of breaches. Any AI-related incident, particularly those involving vendors, should activate a well-documented incident response plan, including immediate notification to the Data Protection Board of India within statutory timelines. A proactive approach, rather than reactive scrambling, will be the difference between managed risk and regulatory sanction.
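A minimal sketch of that notification discipline follows; the 72-hour window is an assumption drawn from the draft DPDP Rules and should be verified against the rules as finally notified:

```python
# Computing the outer deadline for a detailed breach report to the
# Data Protection Board. The 72-hour window is an assumed figure.
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=72)  # assumption, not settled law

def dpb_report_deadline(detected_at: datetime) -> datetime:
    """Latest time for the detailed report to the Board."""
    return detected_at + REPORTING_WINDOW

detected = datetime(2025, 9, 17, 10, 30)
print("Report to the DPB by:", dpb_report_deadline(detected))
```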

IX. IDPS 2025 and Beyond

The IDPS 2025 conference marks a critical milestone for fiduciaries navigating the convergence of data protection and artificial intelligence. By adopting the theme "Bracing for Impact," the event underscores the urgency of preparing organizations not only for the immediate enforcement of the DPDPA, but also for the disruptive challenges arising from the rapid adoption of AI technologies. For boards, compliance officers, and technology leaders, the message is clear – data governance must evolve beyond static compliance checklists into dynamic, risk-based strategies that anticipate both regulatory scrutiny and operational vulnerabilities.

One of the principal takeaways from the discussions is that fiduciaries cannot treat DPDPA and AI as isolated compliance obligations. The risks and responsibilities overlap, whether in the conduct of DPIAs, the structuring of vendor relationships, or the management of cross-border data transfers. Integration of compliance strategies across both dimensions, i.e. personal data protection and AI governance, is therefore not optional but imperative for resilience in an evolving regulatory environment.

The conference also highlighted the global ripple effects shaping Indian regulatory approaches. Developments such as the EU Data Act and the AI Act, along with sectoral guidance from the U.S. and OECD frameworks, are influencing Indian regulators to adopt a converging landscape of rules. This means fiduciaries operating in India will increasingly need to benchmark against international best practices, even when the statutory text of the DPDPA appears narrower in scope. Cross-border data operations, in particular, will need harmonized strategies to avoid regulatory arbitrage or conflict.

Finally, frameworks like DGPSI-AI, ISO 42001, and ISO 42005 were showcased as practical scaffolding for defensible compliance. These tools offer fiduciaries a way to translate broad statutory mandates into actionable processes, from conducting DPIAs to instituting accountability measures across supply chains. As IDPS 2025 makes clear, the road ahead will be one where compliance maturity is measured not just by adherence to law, but by an organization’s ability to demonstrate proactive, risk-aware governance in the age of AI.

X. Facing the Future through IDPS 2025

As IDPS 2025 convenes the data protection fraternity, it continues a tradition of being more than a technical forum — it is a space where law, policy, and practice converge to anticipate the future. The compounded challenge of the DPDPA and the disruptive force of AI is no longer theoretical; for fiduciaries, it is an everyday reality where compliance lapses and algorithmic opacity can collide, exposing institutions to legal, ethical, and reputational risk.

What makes this edition particularly forward-looking is its comparative orientation. European regulators are grappling with the interplay of GDPR and the AI Act, while the United States is experimenting with sectoral AI governance layered onto existing privacy rules. These global currents offer both lessons and warnings for India. By bringing international voices into dialogue with domestic stakeholders, IDPS 2025 situates Indian fiduciaries within a global compliance ecosystem rather than an insular one.

The DGPSI-AI framework may provide the scaffolding, but it is at IDPS that fiduciaries begin to cultivate the institutional culture of resilience that regulation alone cannot mandate. In this sense, IDPS 2025 is not merely about bracing for impact — it is about rehearsing a future where compliance, innovation, and trust are sustained in tandem.

Let us collaborate, engage in thoughtful deliberation, and embrace the emerging reality with confidence.

By Mr. M.G.Kodandaram, IRS.


Power of State Government to make laws for Electronic Documents

 

Consequent to the new Gaming Act passed by the Government of India, there is pressure from gaming companies to persuade State Governments to frame their own laws, so that in the case against the Central law an argument can be made that the power to make this law lies with the States, and that many States already have such laws.

This is an attempt to preserve the "income" that state politicians derive from the running of these online betting and other illegal activities in the guise of online games.

This must be opposed.

Online gaming deals with a "game" that is run on a "computer" or a computer-like device. ITA 2000 is the only law that defines the law of "Cyber Space".

"Cyber Space" is an area of activity different from physical space. A State Government may have the right to regulate a game in physical space, but it does not have the power to frame laws in Cyber Space. Just as the maritime zone, satellite space, air space, spectrum etc. are regulated under Central law, the "electronic gaming space" is Cyber Space and does not come under the jurisdiction of the State Government.

In ITA 2000, Section 90 specifies:

Section 90: Power of State Government to make rules

(1) The State Government may, by notification in the Official Gazette, make rules to carry out the provisions of this Act.

(2) In particular, and without prejudice to the generality of the foregoing power, such rules may provide for all or any of the following matters, namely –

(a) the electronic form in which filing, issue, grant receipt or payment shall be effected under sub-section (1) of section 6;
(b) for matters specified in sub-section (2) of section 6;

(3) Every rule made by the State Government under this section shall be laid, as soon as may be after it is made, before each House of the State Legislature where it consists of two Houses, or where such Legislature consists of one House, before that House.

This power is only to make rules under the provisions of the Act and not to make new provisions applicable to cyber space.

"Cyber Space" is the space where "Binary Expressions" exist and interact with citizens and with other "Binary Expressions". In the age of AI and humanoid robots, we can separately discuss whether "Binary Expressions" are limited to electronic documents only or extend to AI as juridical entities. The fact remains, however, that "binary expressions" create "electronic documents", and these interact to produce the gaming experience in the form of audio and video. The definition of "Computer" in ITA 2000 extends to gaming consoles as well.

Hence the Central Government should oppose the attempts of the gaming industry to challenge the Promotion and Regulation of Online Gaming Act (PROGA), on the ground that online gaming does not belong to State jurisdiction under the Constitution.

States can pass laws for the physical activity of gaming but not for gaming within a gaming console. If this were permitted, the State Governments would also have the jurisdiction to legislate on the processing of data within a computer or a mobile. A State could say that, since ISRO is physically located in Bengaluru, the data accessed in the computer systems at ISRO is under the legislative jurisdiction of the State. If the IAF has a ground station that connects to computing devices in airplanes or on ships, the relevant State Government might claim that that space also comes under its jurisdiction.

To prevent such arguments, we need to clearly establish that while computers as physical entities may exist in physical space, the electronic documents within the computer or in the Internet space are binary expressions and come under the special legislative powers of the Central Government alone.

Hence the State of Karnataka, which is trying to pass a separate gaming law under corruptive push from the industry, should restrain itself and not enter this domain.

I request public-spirited law firms in Karnataka to oppose this move through a PIL filed in the Karnataka High Court, by impleading in the case filed by A 23.

Naavi
