RBI's FREE-AI report on AI adoption in the Financial Sector

Continued from previous post

The 103-page report of the Committee chaired by Dr Pushpak Bhattacharyya lays out the following 7 Sutras.

1. Trust is the Foundation: Trust is non-negotiable and should remain uncompromised.
2. People First: AI should augment human decision-making but defer to human judgment and citizen interest.
3. Innovation over Restraint: Foster responsible innovation with purpose.
4. Fairness and Equity: AI outcomes should be fair and non-discriminatory.
5. Accountability: Accountability rests with the entities deploying AI.
6. Understandable by Design: Ensure explainability for trust.
7. Safety, Resilience, and Sustainability: AI systems should be secure, resilient and energy efficient.

The Committee has made the following 26 recommendations under six pillars.

Each recommendation below is listed with the responsible entity and the implementation timeline.

Innovation Enablement Framework
Infrastructure Pillar
1. Financial Sector Data Infrastructure: A high-quality financial sector data infrastructure should be established, as a digital public infrastructure, to help build trustworthy AI models for the financial sector. It may be integrated with the AI Kosh – India Datasets Platform, established under the IndiaAI Mission. (Regulators and Government; Short term)

2. AI Innovation Sandbox: An AI innovation sandbox for the financial sector should be established to enable REs, FinTechs, and other innovators to develop AI-driven solutions, algorithms, and models in a secure and controlled environment. Other FSRs should also collaborate to contribute to and benefit from this initiative. (RBI, MeitY, FSRs; Short term)

3. Incentives and Funding Support: Appropriate incentive structures and infrastructure must be put in place to encourage inclusive and equitable AI usage among smaller entities. To support innovation and meet strategic sectoral needs, RBI may also consider allocating a fund for setting up data and compute infrastructure. (RBI and Government; Medium term)

4. Indigenous Financial Sector-Specific AI Models: Indigenous AI models (including LLMs, SLMs, or non-LLM models) tailored specifically for the financial sector should be developed and offered as a public good. (Regulators, SROs and Industry; Medium term)

5. Integrating AI with DPI: An enabling framework should be established to integrate AI with DPI in order to accelerate the delivery of inclusive, affordable financial services at scale. (Regulators; Medium term)
Policy Pillar
6. Adaptive and Enabling Policies: Regulators should periodically assess existing policies and legal frameworks to ensure they effectively enable AI-driven innovation and address AI-specific risks. Regulators should develop a comprehensive AI policy framework for the financial sector, anchored in the Committee's 7 Sutras, to provide flexible, forward-looking guidance for AI innovation, adoption, and risk mitigation across the sector. The RBI may consider issuing consolidated AI Guidance to serve as a single point of reference for regulated entities and the broader FinTech ecosystem on the responsible design, development, and deployment of AI solutions. (RBI; Medium term)

7. Enabling AI-Based Affirmative Action: Regulators should encourage AI-driven innovation that accelerates financial inclusion of underserved and unserved sections of society, and other such affirmative actions, by lowering compliance expectations as far as possible without compromising basic safeguards. (Regulators; Medium term)

8. AI Liability Framework: Since AI systems are probabilistic and non-deterministic, regulators should adopt a graded liability framework that encourages responsible innovation. While REs must continue to remain liable for any loss suffered by customers, an accommodative supervisory approach is recommended where the RE has followed appropriate safety mechanisms such as incident reporting, audits, and red teaming. This tolerant supervisory stance should be limited to first-time / one-off aberrations and denied in the event of repeated breaches, gross negligence, or failure to remediate identified issues. (Regulators; Medium term)

9. AI Institutional Framework: A permanent multi-stakeholder AI Standing Committee should be constituted under the Reserve Bank of India to continuously advise it on emerging opportunities and risks, monitor the evolution of AI technology, and assess the ongoing relevance of current regulatory frameworks. The Committee may be constituted for an initial period of five years, with a built-in review mechanism and a sunset clause. A dedicated institution should be established for the financial sector, operating under a hub-and-spoke model with the national-level AI Safety Institute, for continuous monitoring and sectoral coordination. (Regulators, RBI; Short term)
Capacity Pillar
10. Capacity Building within REs: REs should develop AI-related capacity and governance competencies for the Board and C-suite, along with structured and continuous training, upskilling, and reskilling programs for the broader workforce that uses AI, to effectively mitigate AI risks and ensure ethical and responsible AI adoption. (REs; Medium term)

11. Capacity Building for Regulators and Supervisors: Regulators and supervisors should invest in training and institutional capacity-building initiatives to ensure an adequate understanding of AI technologies and to ensure that regulatory and supervisory frameworks keep pace with the evolving AI landscape, including associated risks and ethical considerations. RBI may consider establishing a dedicated AI institute to support sector-wide capacity development. (RBI; Medium term)

12. Framework for Sharing Best Practices: The financial services industry, through bodies such as IBA or SROs, should establish a framework for the exchange of AI-related use cases, lessons learned, and best practices, and promote responsible scaling by highlighting positive outcomes, challenges, and sound governance frameworks. (Industry Association / SRO; Medium term)

13. Recognise and Reward Responsible AI Innovation: Regulators and industry bodies should introduce structured programs to recognise and reward responsible AI innovation in the financial sector, particularly innovations that demonstrate positive social impact and embed ethical considerations by design. (Regulators and Industry; Medium term)
Risk Mitigation Framework
Governance Pillar
14. Board-Approved AI Policy: To ensure the safe and responsible adoption of AI within institutions, REs should establish a board-approved AI policy covering key areas such as governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection measures, AI disclosures, model lifecycle framework, and liability framework. Industry bodies should support smaller entities with an indicative policy template. (REs and Industry; Medium term)

15. Data Lifecycle Governance: REs must establish robust data governance frameworks, including internal controls and policies for data collection, access, usage, retention, and deletion for AI systems. These frameworks should ensure compliance with applicable legislation, such as the DPDP Act, throughout the data lifecycle. (REs; Medium term)

16. AI System Governance Framework: REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning. Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage. REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real-world consequences, this should include human oversight, especially for medium- and high-risk use cases and applications. (REs; Medium term)

17. Product Approval Process: REs should ensure that all AI-enabled products and solutions are brought within the scope of the institutional product approval framework, and that AI-specific risk evaluations are included in product approval frameworks. (REs; Medium term)
Protection Pillar
18. Consumer Protection: REs should establish a board-approved consumer protection framework that prioritises transparency, fairness, and accessible recourse mechanisms for customers. REs must invest in ongoing education campaigns to raise consumer awareness regarding safe AI usage and their rights. (REs; Medium term)

19. Cybersecurity Measures: REs must identify potential security risks arising from their use of AI and strengthen their cybersecurity ecosystems (hardware, software, processes) to address them. REs may also use AI tools to strengthen cybersecurity, including dynamic threat detection and response mechanisms. (REs; Medium term)

20. Red Teaming: REs should establish structured red teaming processes that span the entire AI lifecycle. The frequency and intensity of red teaming should be proportionate to the assessed risk level and potential impact of the AI application, with higher-risk models being subject to more frequent and comprehensive red teaming. Trigger-based red teaming should also be considered to address evolving threats and changes. (REs; Medium term)

21. Business Continuity Plan for AI Systems: REs must augment their existing BCP frameworks to cover both traditional system failures and AI model-specific performance degradation. REs should establish fallback mechanisms and periodically test fallback workflows and AI model resilience through BCP drills. (REs; Medium term)

22. AI Incident Reporting and Sectoral Risk Intelligence Framework: Financial sector regulators should establish a dedicated AI incident reporting framework for REs and FinTechs and encourage timely detection and reporting of AI-related incidents. The framework should adopt a tolerant, good-faith approach to encourage timely disclosure. (REs, Regulators; Medium term)
Assurance Pillar
23. AI Inventory within REs and Sector-Wide Repository: REs should maintain a comprehensive internal AI inventory covering all models, use cases, target groups, dependencies, risks, and grievances, updated at least half-yearly, and it must be made available for supervisory inspections and audits. In parallel, regulators should establish a sector-wide AI repository that tracks AI adoption trends, concentration risks, and systemic vulnerabilities across the financial system, with due anonymisation of entity details. (Regulators and REs; Short term)

24. AI Audit Framework: REs should implement a comprehensive, risk-based, calibrated AI audit framework, aligned with a board-approved AI risk categorisation, to ensure responsible adoption across the AI lifecycle, covering data inputs, the model and algorithm, and the decision outputs.
a. Internal Audits: As the first level, REs should conduct internal audits proportionate to the risk level of AI applications.
b. Third-Party Audits: For high-risk or complex AI use cases, independent third-party audits should be undertaken.
c. Periodic Review: The overall audit framework should be reviewed and updated at least biennially to incorporate emerging risks, technologies, and regulatory developments.
Supervisors should also develop AI-specific audit frameworks, with clear guidance on what to audit, how to assess it, and how to demonstrate compliance. (Supervisors and REs; Medium term)

25. Disclosures by REs: REs should include AI-related disclosures in their annual reports and on their websites. Regulators should specify an AI-specific disclosure framework to ensure consistency and adequacy of information across institutions. (REs, Regulators; Short term)

26. AI Toolkit: An AI Compliance Toolkit will help REs validate, benchmark, and demonstrate compliance against key responsible AI principles such as fairness, transparency, accountability, and robustness. The toolkit should be developed and maintained by a recognised SRO or industry body. (Regulators and Industry; Medium term)

We shall analyse the report as we go forward.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India and is presently working from Bangalore as an Information Assurance Consultant. A pioneer of concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He is now focusing on projects such as Secure Digital India and Cyber Insurance.