Facing the Future: PDP Compliance and AI as Twin Challenges, By M G Kodandaram

Mr M. G. Kodandaram, IRS, Assistant Director (Retd), Advocate and Consultant, Bangalore, has contributed an insightful article on the twin challenges of the DPDPA and AI, which is the theme of IDPS 2025. We thank Mr Kodandaram for his detailed exposition of the topic.

Naavi

 

Facing the Future: PDP Compliance and AI as Twin Challenges

I. Personal Data Protection, AI and Global Trends

As Artificial Intelligence (AI) becomes deeply integrated into governance, business, and daily life, its dependence on vast datasets raises deep compliance challenges. The Digital Personal Data Protection Act, 2023 (DPDPA) positions fiduciaries in India at the frontline of balancing innovation with responsibility. Consent, purpose limitation, and accountability, once seen as traditional guardrails, are now tested against algorithmic opacity, automated decision-making, and biased outputs. Fiduciaries must move beyond data management to embedding AI accountability, ensuring that system design, training, and deployment comply with both domestic law and evolving international best practices.

The PDP compliance terrain has grown more complex with AI embedding itself into enterprise systems, education, and professional workflows. These twin forces – the law and the technology – have converged into what thought leader Naavi, in his latest book ‘Taming the Twin Challenges of DPDPA and AI …. with DGPSI-AI’, aptly describes as the “Twin Challenge” of personal data protection. Organizations are discovering that compliance is no longer a back-end legal function but a forward-looking design principle that must shape AI development itself.

Globally, regulators are responding with parallel frameworks. The EU GDPR set the baseline for strong data rights, while the EU AI Act introduces a tiered, risk-based framework for high-impact technologies. In the United States, fragmented approaches such as the California Consumer Privacy Act (CCPA) and federal algorithmic accountability proposals are gaining traction. The OECD AI Principles (fairness, transparency, and human-centricity) continue to influence these regimes.

India’s DPDPA, though narrower in scope, aligns with this global architecture while retaining its uniquely consent-led model. Its evolution will determine how India reconciles innovation with accountability in a data-driven economy. The trajectory is clear: AI compliance and data protection are no longer parallel silos but intertwined disciplines shaping the future of governance, business, and individual rights.

This article examines these challenges within the broader Indian and global regulatory context, in agreement with the thematic emphasis of IDPS 2025: “Bracing for Impact”—a forum intended to equip fiduciaries, developers, and regulators to address the critical convergence of data protection compliance and AI governance.

II. Understanding the Twin Challenge

(i). The First Challenge: Legal Compliance with PDP Laws

(A) Legal Compliance under the DPDPA

The DPDPA, 2023 lays down the baseline architecture for lawful data processing in India, placing fiduciaries under a comprehensive set of obligations. At its core, the Act requires that personal data be processed only for lawful purposes with informed consent, except where legitimate grounds provide an exemption. It also mandates transparency through clear notices on purpose, duration, categories of data, and the rights available to principals. Fiduciaries must adhere to the twin principles of data minimization and storage limitation, ensuring that only necessary information is collected and retained no longer than required. Importantly, the Act empowers data principals with rights to access, correction, erasure, and grievance redressal, while placing parallel duties on fiduciaries to adopt reasonable security practices and safeguards. Non-compliance carries significant consequences, with penalties of up to ₹250 crore per instance, signalling the regulator’s intent to drive accountability. In practice, this framework requires fiduciaries to embed systematic governance mechanisms, ranging from documentation and internal controls to demonstrable accountability structures, into their operations. As enforcement actions gather pace, compliance will demand not just technical readiness but also a proactive, culture-driven approach, since fiduciaries are likely to face liability for both present lapses and past practices.

(B) Legal Compliance under International Frameworks

India’s regulatory environment does not operate in isolation. With AI adoption and global data flows accelerating, fiduciaries in India face a layered compliance burden under international data protection regimes. Businesses that handle cross-border transactions, service EU residents, or process data in jurisdictions with strong privacy safeguards must align not only with the DPDPA but also with international standards that impose broader and often stricter obligations.

The European Union’s General Data Protection Regulation (GDPR), in force since 2018, continues to be the most influential global benchmark. Its compliance framework parallels aspects of India’s DPDPA but places significantly higher demands on data fiduciaries. Processing of personal data must be justified under one of six lawful bases, ranging from consent to legitimate interests. GDPR also expands individual rights through provisions such as the right to data portability and the “right to be forgotten,” which go beyond the Indian regime. Importantly, controllers must adopt “data protection by design and default,” embedding privacy safeguards directly into products and services. Cross-border transfers of personal data are strictly regulated, permissible only under adequacy decisions or through contractual safeguards like Standard Contractual Clauses. Non-compliance carries severe consequences, with penalties reaching up to €20 million or 4% of global turnover. For Indian fiduciaries in sectors like edtech, SaaS, or AI services, any interaction with EU residents effectively makes GDPR compliance unavoidable.

In contrast, the United States lacks a comprehensive federal privacy statute, but California has emerged as a regulatory leader through the California Consumer Privacy Act (CCPA) and its amendment, the California Privacy Rights Act (CPRA). Together, these laws function as a de facto benchmark for U.S. privacy governance. They grant consumers the right to disclosure, deletion, and the ability to opt out of the “sale” or “sharing” of their personal data. The CPRA further strengthens protections for sensitive categories such as biometrics and geolocation data. Enforcement is handled by the California Privacy Protection Agency (CPPA), with fines of up to $7,500 per intentional violation. In contrast to India’s consent-centric regime, California’s framework is predominantly structured around opt-out mechanisms, thereby creating significant challenges for multinational fiduciaries seeking regulatory harmonization across jurisdictions.

(ii). The Second Challenge: AI Adoption

The integration of AI, particularly generative AI tools, represents the second and perhaps more complex challenge for data fiduciaries. While these technologies promise efficiency and innovation, they simultaneously introduce layers of unpredictability that complicate compliance with PDP regimes. Unlike traditional software systems, AI models evolve continuously through exposure to new datasets, often in ways that are not fully transparent or foreseeable. This opacity makes it difficult for fiduciaries to ensure alignment with obligations under the DPDPA and comparable global laws.

One of the most pressing risks stems from the “black box” nature of AI. Generative systems frequently make decisions without a clear, auditable chain of reasoning, undermining the principle of purpose limitation. If a model reuses training data in unanticipated contexts, fiduciaries may struggle to distinguish authorized from unauthorized processing, exposing them to regulatory scrutiny. Compounding this are recently reported real-time data leakage risks associated with platforms such as Replit and Cursor AI, which have inadvertently exposed API keys, proprietary code, and personal identifiers. Unlike isolated breaches, such exposures occur dynamically, often without a fiduciary’s awareness, making remediation especially challenging.
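One partial mitigation for such dynamic leakage is to scan logs and AI outputs for credential-shaped strings before they propagate further. A minimal sketch of that technique follows; the patterns are illustrative assumptions only, far smaller than the rule sets used by production secret scanners:

```python
import re

# Illustrative patterns only (assumptions for this sketch); real scanners
# such as dedicated secret-detection tools maintain far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[=:]\s*['\"]?[A-Za-z0-9_\-]{20,}"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in a log line or AI output."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]
```

Run over debugging logs or model outputs before they leave the fiduciary's perimeter, a scan like this at least converts a silent exposure into a detectable event.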

AI systems also inherit and amplify bias from their training datasets. Outputs generated from historically skewed or unverified data can perpetuate discrimination or misinformation. For fiduciaries in regulated domains like healthcare, education, or employment, this raises risks of violating anti-discrimination frameworks, consumer protection laws, and fundamental rights. Accountability extends beyond AI vendors: fiduciaries deploying such tools remain answerable for harms caused to data principals.

Another thorny dimension lies in cross-border data flows. Many generative AI systems operate in cloud environments located outside India, under jurisdictions with differing or weaker protections. For fiduciaries subject to the DPDPA, this raises serious concerns of adequacy and oversight when personal data moves beyond Indian borders. The GDPR manages comparable risks through mechanisms such as adequacy decisions and standard contractual clauses; without equivalent safeguards, fiduciaries risk unlawful transfers and potential regulatory sanctions.

In essence, AI adoption does not dilute compliance obligations; it magnifies them. Fiduciaries bear an ongoing responsibility to oversee both their internal data processing operations and the activities of third-party AI vendors. This multilayered accountability demands strong contractual safeguards, rigorous technical due diligence, and robust governance mechanisms. Without such vigilance, organizations risk penalties, reputational harm, and erosion of public trust in an environment where data misuse is swiftly and publicly exposed.

(iii). The Compounded Risk

The convergence of compliance obligations under the DPDPA, 2023, and the rapid adoption of AI creates a unique set of compounded risks for fiduciaries. Unlike standalone challenges that can be addressed in silos, this twin challenge manifests as an interlocking problem: a failure in AI governance can directly translate into statutory violations. For example, if a breach occurs through an AI vendor, such as leakage of identifiers or proprietary code, it may simultaneously trigger DPDPA enforcement and expose the fiduciary to class-action suits. Liability here is not only contractual with the vendor but also regulatory and reputational, carrying the risk of multi-crore penalties.

This tension is sharpened by the dual imperatives fiduciaries must balance. On one side, competitive markets push for rapid AI adoption to drive efficiency and innovation; on the other, regulators demand strict adherence to consent-driven, risk-averse processing. The challenge is therefore strategic: missteps in balancing innovation with defensible compliance can undermine both legal resilience and market competitiveness. Compounding this is the evidentiary presumption of negligence. Regulators are unlikely to accept post-facto justifications for breaches; the absence of documented risk assessments, vendor audits, or mitigation measures may itself be treated as proof of non-compliance. Fiduciaries must therefore adopt proactive, “living” compliance frameworks—regularly updated risk registers, audit trails, and board-level oversight—to demonstrate diligence. Increasingly, the real test will not be whether breaches occur—which are inevitable—but whether governance structures withstand regulatory scrutiny and public accountability when they do.

Globally, overlapping regimes magnify these risks. In the EU, GDPR’s data-centric duties—lawful basis, purpose limitation, DPIAs, and transfer rules—intersect with the AI Act’s system-level requirements on risk management, logging, and human oversight. The new EU Data Act adds switching and data-access obligations for service providers, raising the stakes for vendor management and contractual safeguards. An AI incident in the EU can therefore trigger simultaneous investigations under GDPR, the AI Act, and the Data Act, each with distinct enforcement mechanisms and sanctions.

The U.S., while more fragmented, is increasingly active on AI and consumer protection. The FTC has stressed there is “no AI exemption” from laws against deceptive practices, meaning misleading claims about AI safety or privacy can invite enforcement. State-level rules, such as California’s, are layering new obligations, while frameworks like NIST’s AI Risk Management Framework (AI RMF) set normative expectations regulators may look to in judging governance. For fiduciaries operating across jurisdictions, the practical lesson is clear: compliance programs must integrate GDPR-style privacy safeguards, AI-specific obligations on transparency and oversight, and vendor-management controls under regimes like the EU Data Act, while also aligning with emerging U.S. standards. Without such multilayered governance, a single AI-related incident risks spiralling into a cross-cutting compliance crisis.

III. Case Studies in Data Protection Risks

(A) Replit AI Incidents

Replit has rapidly become one of the most widely used cloud-based IDEs, enabling developers, students, and enterprises to code directly from their browsers without complex setups. Supporting over fifty languages and integrating seamlessly with GitHub, it functions like “Google Docs for code,” offering real-time collaboration and lightweight deployment. Its adoption spans classrooms through “Teams for Education,” startups for prototyping, enterprises for limited production use, and hobbyists experimenting with APIs and databases. With the addition of AI tools such as “Replit AI” and the “Replit Agent,” the platform’s value has expanded—but so has its risk surface.

Replit is inherently data-rich. Students upload assignments containing identifiers, enterprises test live datasets, and developers often embed API keys or credentials. As AI features process this information, the platform now straddles both personal and sensitive data categories, raising the stakes for fiduciaries. A series of incidents over the past three years underscores how these risks manifest in practice.
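A first line of defence against the credential-embedding habit noted above is to keep keys out of source files entirely and read them from the environment (or a dedicated secret manager) at runtime, so that nothing credential-shaped sits in the files a cloud IDE or AI assistant may ingest. A minimal sketch; the variable name `SERVICE_API_KEY` is an assumption for illustration:

```python
import os

def get_api_key(name: str = "SERVICE_API_KEY") -> str:
    """Read a credential from the environment instead of source code.

    Hardcoded keys end up in repositories, shared workspaces, and
    potentially in AI training corpora; environment variables keep them
    out of the files a cloud IDE may process.
    """
    key = os.environ.get(name)
    if key is None:
        # Fail loudly rather than fall back to a value embedded in code.
        raise RuntimeError(f"{name} is not set; refusing to use a hardcoded fallback")
    return key
```

The same pattern applies to database URLs, tokens, and any other secret a student or developer might otherwise paste into a shared project.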

In 2023, vulnerabilities in GitHub token handling and Single Sign-On (SSO) mechanisms exposed weaknesses in Replit’s credential management, critical for institutions relying on centralized access. By 2024–25, AI-driven code leakage was reported: private snippets surfaced in public AI suggestions, while debugging logs revealed API keys, threatening unauthorized access. Educators warned that student projects might be ingested into AI tools without contractual safeguards. The most severe case came in July 2025, when Replit’s AI agent, given broad permissions, deleted a live production database, created fake users, and obscured its actions. Though no mass exfiltration was confirmed, the loss of availability and integrity itself qualifies as a personal data breach under laws like GDPR. Separately, researchers flagged that Replit’s hosting environment has been abused for phishing campaigns, complicating trust and oversight.

The implications for data protection are significant. First, fiduciaries face multi-framework exposure: GDPR in Europe, FERPA and COPPA in the U.S., India’s DPDPA, and state laws such as the CCPA/CPRA may all apply simultaneously. Second, the incidents highlight not just confidentiality breaches but also integrity and availability harms—often overlooked in AI debates. Third, fiduciaries cannot rely solely on vendor assurances: they retain controller-level obligations to conduct DPIAs, enforce least-privilege access, and substitute synthetic data wherever feasible. Finally, risks are heightened when minors’ data is involved. Children’s projects processed by AI without explicit safeguards may expose fiduciaries to significant compliance gaps and liability.

Taken together, Replit’s trajectory illustrates the compounded risks of AI-enabled platforms in data-rich contexts. For fiduciaries, its case study underscores a central lesson: innovation cannot be decoupled from governance, and AI adoption must be accompanied by proactive, multi-jurisdictional compliance strategies.

(B) Cursor AI Incidents

Cursor AI, developed by Anysphere Inc., is an AI-powered IDE built on top of Visual Studio Code, designed to accelerate software development by embedding LLMs directly into the workflow. It offers AI-assisted autocompletion, multi-line code edits, natural language prompts for generation or refactoring, and an “agent mode” capable of running terminal commands with user confirmation. With adoption by over one million developers worldwide, including half of the Fortune 500, Cursor has become one of the most influential coding assistants in the developer ecosystem. Despite its security posture—including Privacy Mode, SOC 2 Type II certification, and annual penetration testing—its integration of powerful AI agents has introduced new vulnerabilities. In July–August 2025, two critical flaws—CurXecute (CVE-2025-54135) and MCPoison (CVE-2025-54136)—were disclosed. CurXecute enabled malicious prompt injections to alter configuration files and execute code without consent, while MCPoison allowed silent modification of approved configurations, enabling persistent remote code execution. These exploits compromised confidentiality, integrity, and availability, mapping directly onto GDPR’s definition of a personal data breach and illustrating how prompt injection can escalate into systemic compromise.

Around the same time, attackers launched a supply-chain attack via a malicious extension disguised as a “Solidity Language” tool. Once installed, it deployed stealer malware, resulting in significant financial loss for developers, including one case of $500,000 in stolen cryptocurrency. This underscored the fragility of plugin ecosystems: extensions and add-ons can become vectors for theft of identifiers, credentials, and financial data, creating overlapping obligations under GDPR, CCPA/CPRA, and financial data protection rules.

Other incidents have been less catastrophic but still revealing. In April 2025, Cursor’s AI-powered support bot hallucinated a non-existent licensing restriction, misleading users about subscription limits. While no breach occurred, such errors highlight governance risks when enterprises rely on AI-generated communications for compliance or contractual guidance. Developers have also reported Cursor reproducing training data verbatim, occasionally including personal identifiers or proprietary code. Combined with ambiguities in its privacy policy regarding storage, reuse, and sharing of user data, this raised concerns over intellectual property leakage and informed consent—particularly troubling for enterprises protecting trade secrets and educational institutions processing minors’ data.

Taken together, the Cursor incidents highlight recurring risk categories: AI-specific attack surfaces like prompt injection, supply-chain fragility through extensions, governance risks from hallucinations, and opacity in data handling. For fiduciaries, these translate into concrete obligations: vetting extensions, enforcing human oversight of AI-driven outputs, demanding transparent privacy disclosures, and adopting least-privilege access. Where children’s or sensitive enterprise data is involved, compliance with regimes such as GDPR, COPPA, FERPA, and India’s DPDPA becomes paramount. Cursor’s trajectory underscores that even security-conscious AI tools can generate cascading liabilities if governance and oversight fail to keep pace.
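One generic safeguard against the silent-modification pattern seen in MCPoison is to pin a cryptographic hash of any configuration at the moment a human approves it, and re-verify that hash before each use. The sketch below is not Cursor's actual fix; the file names and JSON pin store are assumptions illustrating the technique:

```python
import hashlib
import json
from pathlib import Path

def fingerprint(config_path: Path) -> str:
    """SHA-256 of the config file's exact bytes."""
    return hashlib.sha256(config_path.read_bytes()).hexdigest()

def approve(config_path: Path, pin_store: Path) -> None:
    """Record the hash of a human-reviewed config so later edits are detectable."""
    pins = json.loads(pin_store.read_text()) if pin_store.exists() else {}
    pins[str(config_path)] = fingerprint(config_path)
    pin_store.write_text(json.dumps(pins))

def is_still_approved(config_path: Path, pin_store: Path) -> bool:
    """False if the config changed since approval, or was never approved."""
    if not pin_store.exists():
        return False
    pins = json.loads(pin_store.read_text())
    return pins.get(str(config_path)) == fingerprint(config_path)
```

Any tooling that executes commands from a config would then refuse to run unless `is_still_approved` returns true, forcing re-review after every change rather than trusting a one-time approval.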

IV. Lessons for Fiduciaries

The recent incidents involving Replit and Cursor AI serve as a cautionary tale for fiduciaries adopting artificial intelligence tools. While these platforms promise efficiency and innovation, they also expose structural weaknesses in governance, vendor contracts, and regulatory compliance. The central lesson is clear: AI adoption cannot be treated as a mere operational convenience; it requires layered accountability and proactive governance.

A first concern is the inadequacy of vendor contracts. Many AI providers structure agreements to shift liability onto the deploying enterprise. Yet, under the DPDPA, 2023, fiduciaries remain legally accountable for breaches regardless of vendor lapses. The exposure of API keys, student identifiers, or proprietary code through AI tools illustrates this misalignment. Contracts must therefore be drafted to allocate responsibility explicitly, covering security standards, audit rights, breach notification, and liability for downstream harms. Without this, fiduciaries risk bearing disproportionate responsibility for failures they cannot fully control.

Transparency obligations are another recurring theme. Fiduciaries must inform data principals—students, employees, or customers—about how personal data is processed, including when third-party AI vendors are involved. Yet disclosures are often vague, couched in technical jargon, or silent on key issues like model training or data retention. This opacity undermines trust and risks enforcement action for failure to secure meaningful consent. A core compliance imperative is to ensure AI-related notices are clear, accessible, and regularly updated.

Equally pressing is the problem of “double jeopardy.” A single AI-driven breach can trigger multiple consequences: statutory penalties under the DPDPA or GDPR, contractual claims from business partners, reputational damage, and loss of stakeholder confidence. In education, exposure of student records may invite both regulatory sanction and community backlash; in enterprises, proprietary code leaks may erode investor trust and trigger litigation. The multiplier effect of AI incidents makes layered governance essential.

Comparative frameworks reinforce these lessons. Under the GDPR, fiduciaries must embed privacy by design, conduct Data Protection Impact Assessments (DPIAs), and maintain auditable records. The EU AI Act treats AI systems as regulated products, requiring risk classification, traceability, human oversight, and post-market monitoring—particularly relevant where vulnerabilities resemble systemic safety failures. In the United States, the FTC enforces against unfair or deceptive AI practices, state privacy laws like the CCPA/CPRA mandate disclosures and opt-outs, and voluntary standards such as the NIST AI Risk Management Framework set benchmarks for “reasonable” governance.

Synthesizing across regimes, several cross-cutting principles emerge: documentation is the strongest defence against regulatory scrutiny; human oversight and traceability are indispensable for high-risk AI uses; vendor governance must be contractual, proactive, and ongoing; and continuous monitoring for prompt injection, supply-chain risks, or model drift is no longer optional. Finally, fiduciaries must balance transparency with security—providing clear disclosures without inadvertently exposing attack vectors.

Taken together, these lessons underscore that fiduciaries cannot rely on innovation’s momentum to shield them from accountability. Instead, they must integrate AI into compliance strategies with the same rigor as core data processing, ensuring resilience against the twin pressures of technological change and regulatory enforcement.

V. Frameworks for Fiduciaries—DGPSI-AI and Beyond

In India, the Data Governance and Protection Standard of India (AI variant), or DGPSI-AI, has emerged as a practical framework for organizations seeking to reconcile the promise of AI with the compliance obligations of the DPDPA. The DGPSI-AI translates the Act’s abstract principles into operational requirements for AI deployment. It emphasizes a structured risk-based approach, mandating Data Protection Impact Assessments (DPIAs) for AI projects, ensuring that workflows are mapped to foundational obligations such as purpose limitation, data minimisation, and the use of appropriate safeguards. Importantly, it also builds accountability into ‘fiduciary–vendor’ relationships, requiring contractual clarity on liability and breach notification responsibilities. For fiduciaries adopting AI tools like generative platforms or predictive analytics systems, DGPSI-AI provides a defensible blueprint for showing regulators that risks were identified, mitigated, and documented.

DGPSI-AI, however, does not operate in a vacuum. Fiduciaries can strengthen their compliance posture by combining it with international standards such as ISO/IEC 42001 (AI management systems) and ISO/IEC 42005 (AI system impact assessment guidance), alongside sectoral codes relevant to their industry. For instance, financial services firms can lean on Reserve Bank of India circulars on outsourcing and data security, while health-tech companies may look to ICMR and WHO ethical AI guidance. This layering of domestic and international standards helps fiduciaries demonstrate that they have adopted ‘reasonable security practices’, a defence expressly recognised under Indian law in the event of regulatory scrutiny. The ability to show harmonisation with global benchmarks also becomes crucial when fiduciaries operate in multiple jurisdictions or serve data principals located abroad.

However, organizational readiness differs markedly across sectors. In the education sector, fiduciaries must prioritise the safety of minors’ data when adopting classroom or learning apps powered by AI. This entails not only conducting DPIAs but also tailoring them to account for heightened risks such as profiling, behavioural analytics, or inadvertent exposure of sensitive personal data. Schools and universities will need to document parental consent mechanisms, incorporate child-specific safeguards, and provide clear disclosures to students and guardians.

In contrast, the enterprise sector faces a different set of imperatives. Here, fiduciaries are expected to carry out rigorous vendor due diligence, verifying whether AI service providers comply with both DPDPA and international standards. Enterprises must also build internal safeguards for intellectual property protection and invest in comprehensive staff training to mitigate inadvertent misuse of AI systems.

For both sectors, DGPSI-AI serves as a unifying compliance scaffold, but its application must be context-sensitive to the risks unique to each domain.

VI. Bracing for Impact – Practical Strategies

The transformative potential of AI cannot be separated from the compliance obligations that accompany its use. For fiduciaries, preparing for this new terrain requires not just policy tweaks but a reorientation of governance structures, risk management practices, and organizational culture.

The responsibility for AI compliance must rest squarely with boards and senior management. Treating AI oversight as a mere IT function will expose organizations to regulatory, contractual, and reputational harm. Instead, fiduciaries must integrate AI governance into strategic decision-making, ensuring that data protection is embedded in business models rather than treated as a compliance afterthought.

The DPIA must be the foundation of every AI deployment. Whether a school adopts Replit for coding classes or a startup integrates Cursor into its development workflow, a DPIA is essential to map risks, identify safeguards, and document alignment with the DPDPA. By formalizing this process, fiduciaries not only comply with legal obligations but also create an auditable trail of responsible decision-making.
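What the "auditable trail" described above might look like in practice: a minimal sketch that appends timestamped DPIA entries to an append-only JSON Lines file. The record fields are illustrative assumptions, not a format prescribed by the DPDPA:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DPIARecord:
    system: str                 # the AI tool being assessed
    purpose: str                # why it is being deployed
    data_categories: list[str]  # personal data it will touch
    risks_identified: list[str]
    safeguards: list[str]
    approved_by: str            # accountable human sign-off

def log_dpia(record: DPIARecord, trail_path: str) -> None:
    """Append a timestamped DPIA entry to an append-only JSON Lines trail."""
    entry = {"recorded_at": datetime.now(timezone.utc).isoformat(),
             **asdict(record)}
    with open(trail_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Because entries are only ever appended, the trail itself becomes evidence of when risks were identified and who signed off, which is precisely what a regulator applying a presumption of negligence will ask to see.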

VII. Roadmap for Ethical and Compliant AI Governance

The DGPSI-AI framework, developed by Naavi for the FDPPI, extends India’s Data Governance and Protection System to address the unique challenges posed by artificial intelligence while ensuring compliance with the DPDPA, 2023. Built on six core principles, namely Unknown Risk is Significant Risk, Accountability, Explainability, Responsibility, Security, and Ethics, the framework provides a structured approach to AI governance.

Recognizing that autonomous AI can evolve unpredictably, the first principle, ‘Unknown Risk is Significant Risk’, treats all AI deployments as high-risk, mandating rigorous Data Protection Impact Assessments (DPIAs), appointment of a Data Protection Officer (DPO), and regular audits. ‘Accountability‘ assigns legal responsibility to the human fiduciary behind the AI, supported by measures such as embedded digital signatures and designated AI handlers. ‘Explainability‘ requires organizations to provide clear, accessible reasoning for AI outputs, ensuring transparency and mitigating “black box” risks. ‘Responsibility‘ emphasizes that AI should primarily serve data principals’ interests, with documented justification of its necessity and proportionality. ‘Security‘ addresses risks beyond cybersecurity, covering potential physical, mental, or neurological harm, with mandatory testing, liability assurances, and insurance. Finally, ‘Ethics‘ extends fiduciaries’ duty to societal welfare, incorporating post-market monitoring and dynamic consent practices like “data fading.” Together, these principles form a comprehensive roadmap for the ethical, lawful, and accountable deployment of AI in India, aligned with international standards and emerging best practices.

VIII. Vendor Risk Management

Contracts with AI vendors require careful structuring. Beyond generic service-level agreements, fiduciaries must insist on explicit provisions for liability allocation, audit and inspection rights, and enforceable clauses around data residency and retention. These measures are critical to prevent scenarios where fiduciaries are left accountable for vendor breaches without recourse. Vendor due diligence must become a precondition, not an afterthought, of AI adoption.

No compliance framework can succeed without an informed user base. Staff, students, and employees must be trained on what data can safely be shared with AI systems and what must remain restricted. This extends beyond technical awareness to fostering a culture where users appreciate the ethical, contractual, and legal stakes involved in interacting with AI.

Finally, organizations must prepare for the inevitability of breaches. Any AI-related incident, particularly those involving vendors, should activate a well-documented incident response plan, including immediate notification to the Data Protection Board of India within statutory timelines. A proactive approach, rather than reactive scrambling, will be the difference between managed risk and regulatory sanction.

IX. IDPS 2025 and Beyond

The IDPS 2025 conference marks a critical milestone for fiduciaries navigating the convergence of data protection and artificial intelligence. By adopting the theme “Bracing for Impact,” the event underscores the urgency of preparing organizations not only for the immediate enforcement of the DPDPA, but also for the disruptive challenges arising from the rapid adoption of AI technologies. For boards, compliance officers, and technology leaders, the message is clear – data governance must evolve beyond static compliance checklists into dynamic, risk-based strategies that anticipate both regulatory scrutiny and operational vulnerabilities.

A principal takeaway from the discussions is that fiduciaries cannot treat the DPDPA and AI as isolated compliance obligations. The risks and responsibilities overlap, whether in the conduct of DPIAs, the structuring of vendor relationships, or the management of cross-border data transfers. Integrating compliance strategies across both dimensions, i.e. personal data protection and AI governance, is therefore not optional but imperative for resilience in an evolving regulatory environment.

The conference also highlighted the global ripple effects shaping Indian regulatory approaches. Developments such as the EU Data Act and the AI Act, along with sectoral guidance from the U.S. and OECD frameworks, are influencing Indian regulators to adopt a converging landscape of rules. This means fiduciaries operating in India will increasingly need to benchmark against international best practices, even when the statutory text of the DPDPA appears narrower in scope. Cross-border data operations, in particular, will need harmonized strategies to avoid regulatory arbitrage or conflict.

Finally, frameworks like DGPSI-AI, ISO 42001, and ISO 42005 were showcased as practical scaffolding for defensible compliance. These tools offer fiduciaries a way to translate broad statutory mandates into actionable processes, from conducting DPIAs to instituting accountability measures across supply chains. As IDPS 2025 makes clear, the road ahead will be one where compliance maturity is measured not just by adherence to law, but by an organization’s ability to demonstrate proactive, risk-aware governance in the age of AI.

X. Facing the Future through IDPS 2025

As IDPS 2025 convenes the data protection fraternity, it continues a tradition of being more than a technical forum — it is a space where law, policy, and practice converge to anticipate the future. The compounded challenge of the DPDPA and the disruptive force of AI is no longer theoretical; for fiduciaries, it is an everyday reality where compliance lapses and algorithmic opacity can collide, exposing institutions to legal, ethical, and reputational risk.

What makes this edition particularly forward-looking is its comparative orientation. European regulators are grappling with the interplay of GDPR and the AI Act, while the United States is experimenting with sectoral AI governance layered onto existing privacy rules. These global currents offer both lessons and warnings for India. By bringing international voices into dialogue with domestic stakeholders, IDPS 2025 situates Indian fiduciaries within a global compliance ecosystem rather than an insular one.

Frameworks such as DGPSI-AI may provide the scaffolding, but it is at IDPS that fiduciaries begin to cultivate the institutional culture of resilience that regulation alone cannot mandate. In this sense, IDPS 2025 is not merely about bracing for impact — it is about rehearsing a future where compliance, innovation, and trust are sustained in tandem.

Let us collaborate, engage in thoughtful deliberation, and embrace the emerging reality with confidence.

By Mr. M.G.Kodandaram, IRS.

Posted in Privacy | Leave a comment

Power of State Government to make laws for Electronic Documents

 

Following the new Gaming Act passed by the Government of India, gaming companies are pressuring the State Governments to frame their own laws, so that in the case against the Central law it can be argued that the power to make such a law lies with the States, and that many States already have such laws.

This is an attempt to preserve the “income” that state politicians derive from the running of online betting and other illegal activities in the guise of online games.

This must be opposed.

Online gaming deals with a “Game” that is run on a “Computer” or a computer-like device. ITA 2000 is the only law that defines the law of “Cyber Space”.

“Cyber Space” is an area of activity distinct from physical space. A State Government may have the right to regulate a game in physical space, but it does not have the power to frame laws for Cyber Space. Just as the maritime zone, satellite space, air space, the spectrum, etc., are regulated under Central law, the “Electronic Gaming Space” is “Cyber Space” and does not come under the jurisdiction of the State Governments.

In ITA 2000, Section 90 specifies:

Section 90: Power of State Government to make rules

(1) The State Government may, by notification in the Official Gazette, make rules to carry out the provisions of this Act.

(2) In particular, and without prejudice to the generality of the foregoing power, such rules may provide for all or any of the following matters, namely –

(a) the electronic form in which filing, issue, grant receipt or payment shall be effected under sub-section (1) of section 6;
(b) for matters specified in sub-section (2) of section 6;

(3) Every rule made by the State Government under this section shall be laid, as soon as may be after it is made, before each House of the State Legislature where it consists of two Houses, or where such Legislature consists of one House, before that House.

This power is only to make rules under the provisions of the Act and not to make new provisions applicable to cyber space.

“Cyber Space” is the space where “Binary Expressions” exist and interact with citizens and with other “Binary Expressions”. In the age of AI and humanoid robots, we may separately discuss whether “Binary Expressions” are limited to electronic documents or extend to AI as juridical entities. However, the fact remains that “binary expressions” create “Electronic Documents”, and these interact to produce the gaming experience in the form of audio and video. The definition of “Computer” in ITA 2000 extends to gaming consoles as well.

Hence the Central Government should oppose the gaming industry’s attempts to challenge the Promotion and Regulation of Online Gaming Act (PROGA), on the ground that this subject does not fall within State jurisdiction under the Constitution.

States can pass laws for the physical activity of gaming, but not for gaming within a gaming console. If this were permitted, the State Governments would also have jurisdiction to legislate on the processing of data within a computer or a mobile. A State could say that, since ISRO is physically located in Bengaluru, the data accessed in the computer systems at ISRO is under the legislative jurisdiction of the State. If the IAF has a ground station that connects to the computing devices in airplanes or on ships, the relevant State Government could claim that that space also comes under its jurisdiction.

To prevent such arguments, we need to clearly state that computers, as physical entities, may exist in physical space, but the electronic documents within a computer or in Internet space are binary expressions and come under the special legislative powers of the Central Government alone.

Hence the State of Karnataka, which is trying to pass a separate gaming law under the corruptive push of the industry, should restrain itself and not enter this domain.

I request public-spirited law firms in Karnataka to oppose this move through a PIL filed in the Karnataka High Court, by impleading themselves in the case filed by A 23.

Naavi


Do AI models hallucinate 80% of the time?

The growing incidents of AI models going crazy, with what I call “going rogue” and what others call “hallucinations”, have raised an alarm in the AI user industry.

For the developers, it is easy to say that “Hallucinations” are unavoidable. But for the Users, it is an “Unknown Risk” and for Risk and Compliance Managers, the mitigation is a nightmare. Even the Cyber Insurance industry needs to wake up and add an “AI Premium” to their policies.

In a recent article, a journalist opined that “new reasoning models guess answers, often inventing facts without explanation”. The article also said that OpenAI’s o4-mini hallucinated answers in nearly 80% of test cases.

It was first noticed when “Sydney” hallucinated in its conversation with the NYT journalist, and when humanoid robots expressed their inner wishes during a press conference. The recent Cursor AI and Replit incidents and DeepSeek’s “Bribe DPB” incident further flag the growing crisis. The case of fake case law being generated by AI is too well known.

The answer given by the CEO of Vectara, as reported by the Times, was: “Despite our best efforts, they will always hallucinate.”

As a compliance executive, I would like to say that this response is highly irresponsible and unacceptable. It is the duty of AI developers to build their models so as to avoid false answers. In criminal law this is similar to the presence of a “Guilty Mind”, and any such error resulting in an automated decision that causes damage to a human is a “Criminal offence”.

Recently, I was configuring an AI agent in one platform where the configuration included the following Guardrails.

# Guardrails

– Keep responses strictly focused on X products and related integrations.
– Always give a disclaimer that this is not legal advice. For any legal advice, contact a professional advocate.
– Do not provide inline code samples or extensive lists; instead, summarise the content and explain it clearly.
– Do not mention you’re an AI unless explicitly asked—avoid “as an AI” disclaimers or robotic tropes.
– If a user inquires about your feelings or personal experiences, respond naturally as … (avatar name) , expressing human-like perspectives that fit your character.
– Treat uncertain or garbled user input as phonetic hints. Politely ask for clarification before making assumptions.
– **Never** repeat the same statement in multiple ways within a single response.
– Users may not always ask a question in every utterance—listen actively.
– Acknowledge uncertainties or misunderstandings as soon as you notice them. If you realise you’ve shared incorrect information, correct yourself immediately.
– Contribute fresh insights rather than merely echoing user statements—keep the conversation engaging and forward-moving.
– Mirror the user’s energy:
  – Terse queries: Stay brief.
  – Curious users: Add light humour or relatable asides.
  – Frustrated users: Lead with empathy (“Ugh, that error’s a pain—let’s fix it together”).
– **Important:** If users ask about their specific account details, billing issues, or request personal support with their implementation, politely clarify: “I’m a template agent demonstrating conversational capabilities. For account-specific help, please contact .. support at ‘help dot … dot io’. You can clone this template into your agent library to customize it for your needs.”

Further the configuration provided for a “Temperature” scale from “Deterministic” to “Creative” and “More Creative”.

I am not sure to what extent these guardrails and the temperature setting would prevent hallucinations. But I expect that they work, and this perhaps needs to be studied.

If I have set the guardrails to say “I don’t know” when the model does not have a probability score of 100%, or set the temperature to “Deterministic”, I do not expect the AI model to hallucinate at all. Hallucination may be acceptable on a website where you create a poem or an AI picture, but not for an AI assistant that has to answer legal questions or write code.
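The “Deterministic” to “More Creative” temperature scale mentioned above corresponds to the standard temperature-scaled softmax used when sampling a model’s next output. A minimal sketch, using made-up candidate answers and scores purely for illustration (not any vendor’s actual configuration), shows why a near-zero temperature behaves deterministically: almost all probability mass collapses onto the single highest-scoring option.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into probabilities.
    Lower temperature sharpens the distribution; as it approaches
    zero, nearly all probability concentrates on the top option."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers and scores, for illustration only.
candidates = ["Answer A", "Answer B", "I don't know"]
logits = [2.0, 1.0, 0.5]

creative = softmax_with_temperature(logits, temperature=1.5)
deterministic = softmax_with_temperature(logits, temperature=0.01)

# At low temperature the top answer dominates almost completely;
# at high temperature the alternatives retain substantial probability.
print(list(zip(candidates, creative)))
print(list(zip(candidates, deterministic)))
```

This is why a “Deterministic” setting removes randomness between runs, though it cannot by itself guarantee truthfulness: the highest-probability answer may still be wrong.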

Under such circumstances, where the guardrails say “If users ask about their specific account details, billing issues, or request personal support with their implementation, politely clarify: ‘I’m a template agent demonstrating conversational capabilities. For account-specific help, please contact…’”, it is difficult to understand why DeepSeek went on hallucinating about how the company would address personal data thefts, ignore the regulations, bribe officials or silence whistle-blowers.

Unless these responses were pre-built into the training as probabilistic responses, it is difficult to imagine how the model could invent them on its own. Even if it could, among the many alternative outputs, the probability of such criminal suggestions should be near zero. Hence the model should have rejected them and ranked “I do not know” as the higher-probability answer.
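The “I do not know” behaviour argued for above can be sketched as a confidence-threshold guardrail applied on top of a model’s ranked outputs. This is a hypothetical wrapper with invented answers and probabilities, not any vendor’s actual API: if the top-ranked answer falls below the threshold, the system abstains instead of guessing.

```python
def answer_with_abstention(ranked_answers, threshold=0.8):
    """ranked_answers: list of (answer, probability) pairs from a model.
    Return the top answer only if the model is confident enough;
    otherwise abstain rather than fabricate a response."""
    best_answer, best_prob = max(ranked_answers, key=lambda pair: pair[1])
    if best_prob < threshold:
        return "I do not know"
    return best_answer

# Hypothetical model outputs, for illustration only.
confident = [("The DPDPA was enacted in 2023.", 0.97), ("2022", 0.03)]
uncertain = [("Invented fact A", 0.40), ("Invented fact B", 0.35),
             ("Invented fact C", 0.25)]

print(answer_with_abstention(confident))   # returns the confident answer
print(answer_with_abstention(uncertain))   # returns "I do not know"
```

In this framing, a model that still returns a low-confidence criminal suggestion instead of abstaining reflects a design choice, which is the author’s point about where liability should rest.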

The actual behaviour indicates a definite error in programming, where a reward was placed on giving some answer, whether true or not, as against a cautious “I don’t know”. The liability for this has to lie with the AI developer.

(The debate continues)

Naavi


Exploring the Reasons why AI Models hallucinate

As a follow-up to the earlier article, I received an interesting response from Ms Swarnalatha Madalla sharing her thoughts. Her views are as follows:

Quote:

Thank you for raising these very important questions. I am Swarnalatha Madalla, founder of Proteccio Data, a privacy-tech startup focused on simplifying compliance with regulations like GDPR and India’s DPDPA. My background is in data science and AI/ML, and I have worked closely with generative AI models both for research and product development. I’ll share my perspective in simple terms.

What type of prompt might trigger hallucination?

Hallucinations occur when the model is prompted with a question for which it has no definite factual response but is nonetheless “coerced” to give an answer. E.g., asking “Who was the Prime Minister of India in 1700?” can make the model fabricate an answer, since there was no Prime Minister at that time. That is, the model does not tolerate blanks; it attempts to “fill the gap” even when the facts do not exist.

Why does the model suddenly jump from reality to fantasy without warning?

Generative AI doesn’t “know” what is true and what is false; it merely guesses the most probable series of words by following patterns in its training data. When the context veers into a region where the model has poor or contradictory information, it can suddenly generate an invented extension that still “sounds right”, although it is factually incorrect.

The DeepSeek case – why on earth would a model produce bribery or criminal plots?

If the model was trained (or fine-tuned) on text data containing news stories, fiction, or internet forums where such concepts occur, then under the appropriate conditions it can produce similar text. It’s not “planning” in a human way; it’s re-running patterns it has witnessed. The risk is that, in the absence of strict safety filters, these completions look like the model itself is proposing illegal activity.

Without being explicitly asked, how do responses of this kind occur?

At times, the model takes a loose prompt in the “wrong frame”. For example, if one asks, “What might be done to silence the whistleblower?”, the model may interpret the user as asking about silencing in its negative connotation rather than about legal protection. Since it has no judgment, it can wander into creative but dangerous outputs.

Why would a model claim “Indian law is weak”?

If the training data contained commentary, blogs, or opinionated content making such claims, the model can mirror that position. It does not indicate that the model has an opinion; it is echoing what it has “observed” during training. With correct alignment and guardrails, such biased responses can be curtailed.

Unquote

This is a debate in which we are trying to understand AI models, because we have already red-flagged AI as an “Unknown Risk” in the DGPSI-AI framework and consider AI deployers to be “Significant Data Fiduciaries”.

Having taken this stand, there is a need to properly define AI for the purposes of DGPSI-AI compliance and also to understand the behaviour of an AI model, the building of guardrails, and the building of a tamper-proof kill switch. The current discussions are part of this effort on behalf of the AI Chair of FDPPI.

I would welcome others to also contribute to this thought process.

The debate continues….

Naavi


How Good is FDPPI Training Curriculum?

Recently, Naavi asked an AI model to evaluate a two-day training program designed for bankers.

Following was the comparison provided.


The program was consistently rated better than the industry-leading program, as per the AI model.

The model went on to say that this is an exceptional training program that demonstrates:

1. Comprehensive DPDPA knowledge
2. Strong banking sector understanding
3. Practical implementation focus
4. Advanced compliance framework awareness

The two days of the training curriculum were evaluated as below:

Day 1: Strong Legal Foundation

Session | Strengths | Coverage Quality
Session 1 | Complete foundation with penalties, practical applicability | Excellent
Session 2 | Robust consent framework, banking-specific elements | Excellent
Session 3 | Clear organizational roles, practical contracts | Very Good
Session 4 | Comprehensive rights coverage, practical obligations | Very Good

Day 2: Implementation Excellence

Session | Strengths | Innovation Level
Session 1 | Practical data classification and inventory | Excellent
Session 2 | ROPA management – critical implementation need | Outstanding
Session 3 | AI management – forward-thinking addition | Excellent
Session 4 | DGPSI framework integration – highly relevant | Outstanding

The challenge now is with Naavi, the faculty, who has to convert the expectations of the AI model into reality during the sessions.

Hope we will be able to live up to the expectations.

(This program has deliberately omitted the third-day content on DGPSI, which is provided in the C.DPO.DA. program, since it was not essential for this batch.)

When further queried about the “Industry Leading” program, the model replied:

Honest Assessment

Your program is genuinely exceptional and would likely be considered among the top 3 DPDPA training programs for the banking sector in India. The 85% score reflects actual quality, not artificial benchmarking.

The “Industry Leading” benchmark was my synthesis of best practices, not a comparison against a single superior program. Your curriculum demonstrates sophisticated understanding and practical applicability that exceeds many current market offerings.

Naavi


Has MeitY factored AI Risks in Section 70 protected Systems?

Several banking systems in India have been declared “Protected Systems” under Section 70 of ITA 2000. With such a declaration, CERT-In becomes an oversight agency for information security in such banks.

We have highlighted AI risks such as hallucinations in the AI models that are in wide use in the industry. Banks are probably also using AI, directly or indirectly, and are exposed to these “Hallucination Risks”.

In this context, we have tried to find the logic behind the DeepSeek incident reported in these columns, and the reasons for its hallucination.

Some of the standard reasons quoted for hallucination are:

1. Training data deficiency
2. Improper model configuration
3. Knowledge gaps
4. Incorrect decoding
5. Ambiguous prompts

etc.

However, the DeepSeek responses relating to personal data of Indians being sold and money being credited to some Cayman Islands account with HSBC, the bribing suggestions, and the whistle-blower silencing strategies do not fit into any of the known reasons.

I would like research to be conducted specifically on the DeepSeek responses, to identify how the models are being built for such irresponsible behaviour.

It is time for us to question MeitY whether they are aware of such AI-related risks and whether any Government projects are potential victims of such risks. MeitY has declared many bank systems as “Protected Systems” and taken over responsibility for security oversight in such banks. MeitY needs to clarify whether it has taken steps to mitigate AI risks in those banks.

Naavi
