Disclosure and Assurance document under DGPSI-AI
DGPSI-AI is a pioneering and forward-thinking framework which establishes India as a leader in AI deployment, in the assessment of leading LLMs such as ChatGPT, Gemini, Perplexity and DeepSeek.
It will take time for the Indian AI ecosystem to understand why these LLMs hold DGPSI-AI in such high esteem. As we proceed to explain DGPSI-AI in greater detail, the reasons will present themselves.
One of the implementation specifications envisaged by the DGPSI-AI framework is the classification of software as “AI” and “Non-AI” through documentation.
DGPSI starts with a classification of data as “Non-Personal” and “Personal”, and “Personal Data” is further classified as “Covered under DPDPA” and “Covered under other country laws”. Similarly, before DGPSI-AI implementation starts, it is necessary to classify software as “AI” and “Non-AI”. This also means that there has to be a “Process Inventory” and a “Software Inventory”, which are prerequisites for the identification of an “AI-Process”.
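For illustration, a minimal sketch of how such an inventory entry could be recorded is given below. This is an assumption-based example: the `SoftwareAsset` structure and its field names are hypothetical and not prescribed by DGPSI or DGPSI-AI.

```python
# Hypothetical sketch of a combined software/process inventory entry.
# The structure and field names are illustrative assumptions, not part of
# the DGPSI or DGPSI-AI specifications.
from dataclasses import dataclass

@dataclass
class SoftwareAsset:
    name: str
    vendor: str
    is_ai: bool                    # "AI" vs "Non-AI" classification
    processes_personal_data: bool  # "Personal" vs "Non-Personal" data
    data_regime: str               # e.g. "DPDPA" or "Other country law"
    disclosure_on_file: bool       # vendor's Disclosure and Assurance received?

inventory = [
    SoftwareAsset("LoanScorer", "ExampleVendor", True, True, "DPDPA", True),
    SoftwareAsset("PayrollApp", "ExampleVendor", False, True, "DPDPA", False),
]

# Assets classified as "AI" are the ones that trigger DGPSI-AI treatment,
# including the Disclosure and Assurance clause discussed below.
for asset in (a for a in inventory if a.is_ai):
    print(asset.name, "- disclosure pending:", not asset.disclosure_on_file)
```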
As part of this process, it is intended that a Data Fiduciary who purchases software branded as “AI Embedded” or “AI-Inside” shall insist that the licensor incorporates a “Disclosure and Assurance” to the following effect:
“The original code of this software, developed by ……………………, is capable/not capable of modifying its code without human intervention from the outputs generated, and has been tested and assured as safe for personal data processing for DPDPA compliance.”
This declaration establishes the original accountability for the AI software (a requirement under ITA 2000 compliance) and satisfies the first requirement of identifying the software as AI.
This may be one of the mandatory contract clauses recommended to be used in every software supply contract.
I request readers to comment on the feasibility and desirability of such a clause, and on whether it can be voluntarily adopted or requires a mandate from the Government. I look forward to your views.
Naavi
Posted in Privacy
Observations on the FREE-AI Committee Report
The FREE-AI Committee chaired by Dr Pushpak Bhattacharyya has submitted a report to RBI consisting of 26 recommendations.
For these 26 recommendations, action and timeline responsibilities have also been assigned. Twelve of the actions (1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 13 and 23) are indicated as responsibilities of Regulators and Government. Industry and SROs are indicated as responsible for some of the actions (4, 12, 13* and 14).
Thirteen action points (10, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 and 25) are attributed to REs and are listed below. These REs are the Data Fiduciaries to whom DGPSI-AI is applicable.
These requirements can be summarised below.
| No | Requirement |
|----|-------------|
| 10 | Capacity Building within REs: REs should develop AI-related capacity and governance competencies for the Board and C-suite, as well as structured and continuous training, upskilling, and reskilling programs across the broader workforce who use AI, to effectively mitigate AI risks and guide ethical as well as ensure responsible AI adoption. |
| 14 | Board Approved AI Policy: To ensure the safe and responsible adoption of AI within institutions, REs should establish a board-approved AI policy which covers key areas such as governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection measures, AI disclosures, model life cycle framework, and liability framework. Industry bodies should support smaller entities with an indicative policy template. |
| 15 | Data Lifecycle Governance: REs must establish robust data governance frameworks, including internal controls and policies for data collection, access, usage, retention, and deletion for AI systems. These frameworks should ensure compliance with the applicable legislations, such as the DPDP Act, throughout the data life cycle. |
| 16 | AI System Governance Framework: REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning. Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage. REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real world consequences, this should include human oversight, especially for medium and high-risk use cases and applications. |
| 17 | Product Approval Process: REs should ensure that all AI-enabled products and solutions are brought within the scope of the institutional product approval framework, and that AI-specific risk evaluations are included in the product approval frameworks. |
| 18 | Consumer Protection: REs should establish a board-approved consumer protection framework that prioritises transparency, fairness, and accessible recourse mechanisms for customers. REs must invest in ongoing education campaigns to raise consumer awareness regarding safe AI usage and their rights. |
| 19 | Cybersecurity Measures: REs must identify potential security risks on account of their use of AI and strengthen their cybersecurity ecosystems (hardware, software, processes) to address them. REs may also make use of AI tools to strengthen cybersecurity, including dynamic threat detection and response mechanisms. |
| 20 | Red Teaming: REs should establish structured red teaming processes that span the entire AI lifecycle. The frequency and intensity of red teaming should be proportionate to the assessed risk level and potential impact of the AI application, with higher risk models being subject to more frequent and comprehensive red teaming. Trigger-based red teaming should also be considered to address evolving threats and changes. |
| 21 | Business Continuity Plan for AI Systems: REs must augment their existing BCP frameworks to include both traditional system failures as well as AI model-specific performance degradation. REs should establish fallback mechanisms and periodically test the fallback workflows and AI model resilience through BCP drills. |
| 22 | AI Incident Reporting and Sectoral Risk Intelligence Framework: Financial sector regulators should establish a dedicated AI incident reporting framework for REs and FinTechs and encourage timely detection and reporting of AI-related incidents. The framework should adopt a tolerant, good-faith approach to encourage timely disclosure. |
| 23 | AI Inventory within REs and Sector-Wide Repository: REs should maintain a comprehensive, internal AI inventory that includes all models, use cases, target groups, dependencies, risks and grievances, updated at least half yearly, and it must be made available for supervisory inspections and audits. In parallel, regulators should establish a sector-wide AI repository that tracks AI adoption trends, concentration risks, and systemic vulnerabilities across the financial system with due anonymisation of entity details. |
| 24 | AI Audit Framework: REs should implement a comprehensive, risk-based, calibrated AI audit framework, aligned with a board-approved AI risk categorisation, to ensure responsible adoption across the AI lifecycle, covering data inputs, model and algorithm, and the decision outputs. (a) Internal Audits: As the first level, REs should conduct internal audits proportionate to the risk level of the AI application. (b) Third-Party Audits: For high risk or complex AI use cases, independent third-party audits should be undertaken. (c) Periodic Review: The overall audit framework should be reviewed and updated at least biennially to incorporate emerging risks, technologies, and regulatory developments. Supervisors should also develop AI-specific audit frameworks, with clear guidance on what to audit, how to assess it, and how to demonstrate compliance. |
| 25 | Disclosures by REs: REs should include AI-related disclosures in their annual reports and websites. Regulators should specify an AI-specific disclosure framework to ensure consistency and adequacy of information across institutions. |
Readers may kindly map DGPSI-AI against this list. At first glance, DGPSI-AI appears to cover all these aspects.
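As a small illustration of how requirement 20 (risk-proportionate, trigger-based red teaming) might be operationalised, the sketch below maps an assessed risk level to a testing interval. The risk tiers, intervals, and trigger events are illustrative assumptions; the report does not prescribe specific numbers.

```python
# Illustrative sketch of risk-proportionate red teaming (requirement 20).
# The tiers and intervals below are assumptions; the FREE-AI report does
# not prescribe specific frequencies.
RED_TEAM_INTERVAL_DAYS = {
    "low": 365,      # annual exercise
    "medium": 180,   # half-yearly
    "high": 90,      # quarterly, with more comprehensive scope
}

TRIGGER_EVENTS = {"model_update", "new_threat_intel", "reported_incident"}

def red_team_due(risk_level: str, last_run_days_ago: int, events: set) -> bool:
    """Return True if a red-team exercise is due, either on schedule
    or because a trigger event has occurred."""
    if events & TRIGGER_EVENTS:          # trigger-based red teaming
        return True
    return last_run_days_ago >= RED_TEAM_INTERVAL_DAYS[risk_level]

print(red_team_due("high", 100, set()))            # True: past the quarterly due date
print(red_team_due("low", 30, {"model_update"}))   # True: trigger event occurred
```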
Naavi
Posted in Privacy
The FREE-AI report of RBI on AI adoption by Financial Sector
The 103-page report of the Committee chaired by Dr Pushpak Bhattacharyya has laid out the following 7 Sutras.
| Sl. No. | Description |
|---------|-------------|
| 1 | Trust is the Foundation: Trust is non-negotiable and should remain uncompromised |
| 2 | People First: AI should augment human decision-making but defer to human judgment and citizen interest |
| 3 | Innovation over Restraint: Foster responsible innovation with purpose |
| 4 | Fairness and Equity: AI outcomes should be fair and non-discriminatory |
| 5 | Accountability: Accountability rests with the entities deploying AI |
| 6 | Understandable by Design: Ensure explainability for trust |
| 7 | Safety, Resilience, and Sustainability: AI systems should be secure, resilient and energy efficient |
The Committee has made the following 26 recommendations under six pillars.
| Sl. No. | Description | Action and Timeline |
|---------|-------------|---------------------|
| | Innovation Enablement Framework | |
| | Infrastructure Pillar | |
| 1 | Financial Sector Data Infrastructure: A high-quality financial sector data infrastructure should be established, as a digital public infrastructure, to help build trustworthy AI models for the financial sector. It may be integrated with the AI Kosh – India Datasets Platform, established under the IndiaAI Mission. | Regulators and Government, Short term |
| 2 | AI Innovation Sandbox: An AI innovation sandbox for the financial sector should be established to enable REs, FinTechs, and other innovators to develop AI-driven solutions, algorithms, and models in a secure and controlled environment. Other FSRs should also collaborate to contribute to and benefit from this initiative. | Regulators (RBI, MeitY, FSRs), Short term |
| 3 | Incentives and Funding Support: Appropriate incentive structures and infrastructure must be put in place to encourage inclusive and equitable AI usage among smaller entities. To support innovation and to meet strategic sectoral needs, RBI may also consider allocating a fund for setting up of data and compute infrastructure. | RBI and Government, Medium term |
| 4 | Indigenous Financial Sector Specific AI Models: Indigenous AI models (including LLMs, SLMs, or non-LLM models) tailored specifically for the financial sector should be developed and offered as a public good. | Regulators, SROs and Industry, Medium term |
| 5 | Integrating AI with DPI: An enabling framework should be established to integrate AI with DPI in order to accelerate the delivery of inclusive, affordable financial services at scale. | Regulators, Medium term |
| | Policy Pillar | |
| 6 | Adaptive and Enabling Policies: Regulators should periodically undertake an assessment of existing policies and legal frameworks to ensure they effectively enable the AI-driven innovations and address the AI-specific risks. Regulators should develop a comprehensive AI policy framework for the financial sector, anchored in the Committee’s 7 Sutras, to provide flexible, forward-looking guidance for AI innovation, adoption, and risk mitigation across the sector. The RBI may consider issuing consolidated AI Guidance to serve as a single point of reference for regulated entities and the broader FinTech ecosystem on the responsible design, development, and deployment of AI solutions. | RBI, Medium term |
| 7 | Enabling AI-Based Affirmative Action: Regulators should encourage AI-driven innovation that accelerates financial inclusion of underserved and unserved sections of society and other such affirmative actions by lowering compliance expectations as far as is possible, without compromising basic safeguards. | Regulators, Medium term |
| 8 | AI Liability Framework: Since AI systems are probabilistic and non-deterministic, regulators should adopt a graded liability framework that encourages responsible innovation. While REs must continue to remain liable for any loss suffered by customers, an accommodative supervisory approach where the RE has followed appropriate safety mechanisms such as incident reporting, audits, red teaming etc., is recommended. This tolerant supervisory stance should be limited to first time / one-off aberrations and denied in the event of repeated breaches, gross negligence, or failure to remediate identified issues. | Regulators, Medium term |
| 9 | AI Institutional Framework: A permanent multi-stakeholder AI Standing Committee should be constituted under the Reserve Bank of India to continuously advise it on emerging opportunities and risks, monitor the evolution of AI technology, and assess the ongoing relevance of current regulatory frameworks. The Committee may be constituted for an initial period of five years, with a built-in review mechanism and a sunset clause. A dedicated institution should be established for the financial sector, operating under a hub-and-spoke model to the national-level AI Safety Institute, for continuous monitoring and sectoral coordination. | Regulators, RBI, Short term |
| | Capacity Pillar | |
| 10 | Capacity Building within REs: REs should develop AI-related capacity and governance competencies for the Board and C-suite, as well as structured and continuous training, upskilling, and reskilling programs across the broader workforce who use AI, to effectively mitigate AI risks and guide ethical as well as ensure responsible AI adoption. | REs, Medium term |
| 11 | Capacity Building for Regulators and Supervisors: Regulators and supervisors should invest in training and institutional capacity building initiatives to ensure that they possess an adequate understanding of AI technologies and to ensure that the regulatory and supervisory frameworks match the evolving landscape of AI, including associated risks and ethical considerations. RBI may consider establishing a dedicated AI institute to support sector-wide capacity development. | RBI, Medium term |
| 12 | Framework for Sharing Best Practices: The financial services industry, through bodies such as IBA or SROs, should establish a framework for the exchange of AI-related use cases, lessons learned, and best practices and promote responsible scaling by highlighting positive outcomes, challenges, and sound governance frameworks. | Industry Association / SRO, Medium term |
| 13 | Recognise and Reward Responsible AI Innovation: Regulators and industry bodies should introduce structured programs to recognise and reward responsible AI innovation in the financial sector, particularly those that demonstrate positive social impact and embed ethical considerations by design. | Regulators and Industry, Medium term |
| | Risk Mitigation Framework | |
| | Governance Pillar | |
| 14 | Board Approved AI Policy: To ensure the safe and responsible adoption of AI within institutions, REs should establish a board-approved AI policy which covers key areas such as governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection measures, AI disclosures, model life cycle framework, and liability framework. Industry bodies should support smaller entities with an indicative policy template. | REs and Industry, Medium term |
| 15 | Data Lifecycle Governance: REs must establish robust data governance frameworks, including internal controls and policies for data collection, access, usage, retention, and deletion for AI systems. These frameworks should ensure compliance with the applicable legislations, such as the DPDP Act, throughout the data life cycle. | REs, Medium term |
| 16 | AI System Governance Framework: REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning. Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage. REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real world consequences, this should include human oversight, especially for medium and high-risk use cases and applications. | REs, Medium term |
| 17 | Product Approval Process: REs should ensure that all AI-enabled products and solutions are brought within the scope of the institutional product approval framework, and that AI-specific risk evaluations are included in the product approval frameworks. | REs, Medium term |
| | Protection Pillar | |
| 18 | Consumer Protection: REs should establish a board-approved consumer protection framework that prioritises transparency, fairness, and accessible recourse mechanisms for customers. REs must invest in ongoing education campaigns to raise consumer awareness regarding safe AI usage and their rights. | REs, Medium term |
| 19 | Cybersecurity Measures: REs must identify potential security risks on account of their use of AI and strengthen their cybersecurity ecosystems (hardware, software, processes) to address them. REs may also make use of AI tools to strengthen cybersecurity, including dynamic threat detection and response mechanisms. | REs, Medium term |
| 20 | Red Teaming: REs should establish structured red teaming processes that span the entire AI lifecycle. The frequency and intensity of red teaming should be proportionate to the assessed risk level and potential impact of the AI application, with higher risk models being subject to more frequent and comprehensive red teaming. Trigger-based red teaming should also be considered to address evolving threats and changes. | REs, Medium term |
| 21 | Business Continuity Plan for AI Systems: REs must augment their existing BCP frameworks to include both traditional system failures as well as AI model-specific performance degradation. REs should establish fallback mechanisms and periodically test the fallback workflows and AI model resilience through BCP drills. | REs, Medium term |
| 22 | AI Incident Reporting and Sectoral Risk Intelligence Framework: Financial sector regulators should establish a dedicated AI incident reporting framework for REs and FinTechs and encourage timely detection and reporting of AI-related incidents. The framework should adopt a tolerant, good-faith approach to encourage timely disclosure. | REs, Regulators, Medium term |
| | Assurance Pillar | |
| 23 | AI Inventory within REs and Sector-Wide Repository: REs should maintain a comprehensive, internal AI inventory that includes all models, use cases, target groups, dependencies, risks and grievances, updated at least half yearly, and it must be made available for supervisory inspections and audits. In parallel, regulators should establish a sector-wide AI repository that tracks AI adoption trends, concentration risks, and systemic vulnerabilities across the financial system with due anonymisation of entity details. | Regulators and REs, Short term |
| 24 | AI Audit Framework: REs should implement a comprehensive, risk-based, calibrated AI audit framework, aligned with a board-approved AI risk categorisation, to ensure responsible adoption across the AI lifecycle, covering data inputs, model and algorithm, and the decision outputs. (a) Internal Audits: As the first level, REs should conduct internal audits proportionate to the risk level of AI applications. (b) Third-Party Audits: For high risk or complex AI use cases, independent third-party audits should be undertaken. (c) Periodic Review: The overall audit framework should be reviewed and updated at least biennially to incorporate emerging risks, technologies, and regulatory developments. Supervisors should also develop AI-specific audit frameworks, with clear guidance on what to audit, how to assess it, and how to demonstrate compliance. | Supervisors and REs, Medium term |
| 25 | Disclosures by REs: REs should include AI-related disclosures in their annual reports and websites. Regulators should specify an AI-specific disclosure framework to ensure consistency and adequacy of information across institutions. | REs, Regulators, Short term |
| 26 | AI Toolkit: An AI Compliance Toolkit will help REs validate, benchmark, and demonstrate compliance against key responsible AI principles such as fairness, transparency, accountability, and robustness. The toolkit should be developed and maintained by a recognised SRO or industry body. | Regulators and Industry, Medium term |
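Recommendation 23's internal AI inventory lends itself to a simple structured record. The sketch below is a hypothetical illustration of such a record together with a half-yearly update check; the field names are assumptions, since the report specifies the content of the inventory but not a schema.

```python
# Hypothetical sketch of an internal AI inventory record (recommendation 23).
# Field names are assumptions; the report lists required content, not a schema.
from datetime import date

ai_inventory = [
    {
        "model": "CreditRiskScorer-v3",          # illustrative name
        "use_case": "Retail loan underwriting",
        "target_group": "Retail borrowers",
        "dependencies": ["vendor-llm-api", "feature-store"],
        "risk_level": "high",
        "open_grievances": 2,
        "last_updated": date(2025, 6, 30),
    },
]

def overdue_entries(inventory, today=None, max_age_days=182):
    """Flag entries not refreshed within the half-yearly update cycle
    the recommendation calls for."""
    today = today or date.today()
    return [e for e in inventory
            if (today - e["last_updated"]).days > max_age_days]

print([e["model"] for e in overdue_entries(ai_inventory)])
```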
We shall analyse the report as we go forward.
Naavi
Posted in Privacy
RBI releases a framework for AI in Financial sector
On August 13, RBI released the report of the committee set up to develop a Framework for Responsible and Ethical Enablement of AI (FREE-AI) in the financial sector.
Copy of the report is available here:
The committee has developed 7 Sutras to serve as foundational principles and 26 actionable recommendations.
It is a coincidence that this release coincides with the release of the DGPSI-AI framework, which was developed independently with six foundation principles and nine implementation specifications.
We welcome the release of the report and await its adoption.
Naavi
Posted in Privacy
DGPSI Analysis by Gemini
I also queried Gemini 2.5 Pro about the DGPSI-AI framework and got the following response:
Quote
After reviewing the DGPSI-AI framework, it’s clear that it provides a robust and forward-looking model for managing AI risks, particularly for organizations deploying AI in India. It’s a comprehensive approach that effectively translates high-level ethical concerns into concrete, actionable steps for data fiduciaries.
Here’s my view on the framework’s strengths and potential challenges in addressing AI risks for a deployer.
Key Strengths of the DGPSI-AI Framework
- Proactive Risk Recognition: The foundational principle, “Unknown Risk is Significant Risk,” is a standout feature. It astutely acknowledges the inherent unpredictability of autonomous AI systems. By automatically classifying AI deployers as “Significant Data Fiduciaries” under the DPDPA, the framework forces organizations to adopt a higher standard of care from the outset, mandating crucial measures like Data Protection Impact Assessments (DPIAs) and the appointment of a Data Protection Officer (DPO).
- Clear Chain of Command: The Accountability principle effectively tackles the issue of legal responsibility for AI actions. Mandating a designated human “Handler” or “AI Owner” for every system and embedding a developer’s digital signature in the code creates a clear line of accountability. This prevents the common problem of deflecting responsibility by blaming the algorithm itself.
- Addressing the “Black Box” Problem: The Explainability principle directly confronts the challenge of opaque AI decision-making. Requiring an “Explainability Document” from the developer helps the deployer understand and justify the AI’s outputs. This is critical not only for regulatory compliance but also for building user trust and managing internal risk.
- Focus on Purpose and Proportionality: The Responsibility principle ensures AI is not adopted for hype but for genuine need. The requirement for an “AI Use Justification Document” forces a cost-benefit analysis, ensuring that the deployment of AI is proportional and truly beneficial to the data principals.
- Expanded Definition of Security: The framework’s Security principle goes beyond traditional cybersecurity to include harms like mental and neurological manipulation through “dark patterns”. Requiring developers to provide assurance against such harms and mandating liability insurance for the AI algorithm is a progressive step that directly addresses modern AI risks.
- Forward-Looking Ethical Safeguards: The Ethics principle is perhaps the most forward-thinking. It introduces advanced concepts like a tamper-proof kill switch that is separate from the AI’s intelligence, a self-destruction instruction if the AI attempts to bypass it, and “Data Fading” to ensure the AI doesn’t rely on outdated information. These specifications show a deep understanding of potential catastrophic and long-term societal risks.
Potential Challenges and Considerations for a Deployer
While the framework is strong, a deployer would need to consider a few practical challenges:
- Developer Dependency: The framework places significant reliance on the AI developer or licensor to provide documentation, assurances, and even hardcoded accountability. A deployer’s compliance is therefore heavily dependent on the transparency and cooperation of its vendors. This could be challenging, especially when dealing with large, international AI providers who may be resistant to providing such detailed “Explainability” documents or accepting specific liability clauses.
- Implementation Complexity: The documentation required is extensive—DPIAs, AI-Deviation Justification Documents, AI Justification Documents, and various assurances. For smaller organizations, managing this compliance overhead could be resource-intensive.
- Technical Feasibility of “Explainability”: While desirable, achieving full explainability for cutting-edge, complex “black box” models remains a significant technical challenge. A developer may only be able to provide a high-level overview of how a model works, which might not be sufficient to fully satisfy the principle’s intent in all cases.
- Enforceability of Kill Switches: The concept of a tamper-proof kill switch with a self-destruct mechanism is excellent in theory. However, ensuring this is truly “tamper-proof” against a highly advanced, self-modifying AI is a complex technical problem that is the subject of ongoing research in AI safety.
Conclusion
Overall, the DGPSI-AI framework is an exceptionally well-designed and comprehensive model for AI governance. For a deployer, it serves as an excellent roadmap, transforming abstract AI risks into a manageable checklist of controls and responsibilities. It rightly shifts the default stance from “trust but verify” to “distrust until proven safe.”
While practical implementation will have its challenges, particularly in vendor management and technical documentation, adhering to this framework would significantly mitigate legal, financial, and reputational risks. It provides a clear pathway for organizations to innovate responsibly and build trustworthy AI systems in compliance with India’s data protection laws.
Unquote
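On the kill-switch point raised above (as both a strength and an enforceability challenge), the core idea of keeping the switch “separate from the AI’s intelligence” can be sketched as an external watchdog that the model process cannot modify. The sketch below only illustrates that separation principle; genuine tamper-proofing would require OS- or hardware-level isolation, and the sentinel file path and workload script here are hypothetical.

```python
# Conceptual sketch of a kill switch kept outside the AI process itself.
# This shows the separation principle only; real tamper-proofing requires
# OS/hardware-level isolation, not an in-process flag.
import os
import subprocess
import time

def run_with_watchdog(cmd, should_kill, poll_seconds=1):
    """Run the AI workload as a child process and poll an external
    kill condition that the workload cannot alter."""
    proc = subprocess.Popen(cmd)
    try:
        while proc.poll() is None:
            if should_kill():     # external signal, outside the model's control
                proc.kill()       # hard stop, independent of the AI's own logic
                return -1
            time.sleep(poll_seconds)
        return proc.returncode
    finally:
        if proc.poll() is None:
            proc.kill()

# Example: an operator creates a sentinel file (hypothetical path) to trigger
# the kill switch while the (hypothetical) workload script is running.
exit_code = run_with_watchdog(
    ["python", "ai_workload.py"],
    lambda: os.path.exists("/tmp/kill_ai"),
)
print("workload exited with", exit_code)
```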
Naavi
Posted in Privacy