Quantum activities in India

It was a pleasant surprise today to find out that a few private sector companies in India have already made breakthroughs in quantum computing. I heard about Mr Nagendra Nagaraja of Bengaluru and his company qpiai.tech.

It was also good to note that the company is focused on being a “Product Company” and also supports SMEs/MSMEs.

One of the objectives of India's National Quantum Mission is to develop intermediate-scale quantum computers with 50-100 physical qubits in 5 years and 50-1000 physical qubits in 8 years. It was nice to hear that Mr Nagaraja and his team have already developed a 25-qubit system and are planning to reach the 1000-qubit target by 2029-2030.

The other objectives of the Quantum Mission are:

  • Establish satellite-based secure quantum communications over 2000 kilometers within India

  • Create inter-city quantum key distribution networks spanning 2000 km

  • Develop quantum sensors including magnetometers and atomic clocks for precision applications

The Government has also announced four thematic hubs for quantum research, with IISc Bengaluru being one of them, along with the IITs in Delhi, Mumbai and Chennai. DRDO is also collaborating with TCS and TIFR on the development of indigenous quantum processors. HCL and Tech Mahindra are also working on developing quantum software and algorithms.

Apart from the Quantum Research Park and nearly 15 start-ups in Bengaluru, a large quantum tech park is envisaged in Amaravati, Andhra Pradesh.

The integration of quantum and AI technologies may open opportunities in quantum machine learning for enhanced pattern recognition, accelerated ML model training and advanced optimization algorithms.

Hopefully, India will make huge strides in the field and catch up with countries like the US and China in the near future.

We wish all the innovative entrepreneurs who are working in the Quantum plus AI field a grand success.

Naavi

PS: While trying to browse qpiai.tech, do not be confused by similar-looking domain names such as qpai.tech. I wish both these domains would put up a “Lookalikes” disclosure.


Happy Independence Day 2025 to all


Disclosure and Assurance document under DGPSI-AI

DGPSI-AI is a pioneering and forward-thinking framework which, in the assessment of leading LLMs such as ChatGPT, Gemini, Perplexity and DeepSeek, establishes India as a leader in AI deployment.

It may take time for the Indian AI ecosystem to understand why these LLMs place DGPSI-AI in high esteem. As we proceed to explain DGPSI-AI in greater detail, the reasons will present themselves.

One of the implementation specifications envisaged by the DGPSI-AI framework is the classification of software as “AI” or “Non-AI” through documentation.

DGPSI starts with a classification of data as “Non-Personal” and “Personal”, and “Personal Data” itself is further classified as “Covered under DPDPA” and “Covered under other country laws”. Similarly, before DGPSI-AI implementation starts, it is necessary to classify software as “AI” or “Non-AI”. This also means that there has to be a “Process Inventory” and a “Software Inventory”, which are prerequisites for the identification of an “AI-Process”.
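By way of illustration only, the classification and the software inventory could be recorded along the following lines. This is a minimal sketch; DGPSI prescribes the classifications, not this format, and all names and fields below are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch only: DGPSI/DGPSI-AI prescribe the classifications,
# not this format. All names and fields here are hypothetical.

class DataClass(Enum):
    NON_PERSONAL = "Non-Personal"
    PERSONAL_DPDPA = "Personal: covered under DPDPA"
    PERSONAL_OTHER = "Personal: covered under other country laws"

class SoftwareClass(Enum):
    AI = "AI"
    NON_AI = "Non-AI"

@dataclass
class SoftwareInventoryEntry:
    name: str
    vendor: str
    software_class: SoftwareClass   # documented "AI" / "Non-AI" classification
    data_handled: list[DataClass]   # classes of data the software processes

# The "Software Inventory" is a documented list of such entries, from which
# the AI-Processes can be identified for DGPSI-AI implementation.
inventory = [
    SoftwareInventoryEntry("CRM Suite", "ExampleSoft", SoftwareClass.AI,
                           [DataClass.PERSONAL_DPDPA]),
    SoftwareInventoryEntry("Payroll Tool", "ExampleSoft", SoftwareClass.NON_AI,
                           [DataClass.PERSONAL_DPDPA, DataClass.NON_PERSONAL]),
]

ai_software = [e.name for e in inventory if e.software_class is SoftwareClass.AI]
print(ai_software)  # -> ['CRM Suite']
```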

In this process, it is intended that a Data Fiduciary who purchases software branded as “AI Embedded” or “AI Inside” shall insist that the licensor incorporate a “Disclosure and Assurance” to the following effect:

“The original code of this software, developed by ……………………, is capable/not capable of modifying its code without human intervention based on the outputs generated, and has been tested and assured as safe for personal data processing for DPDPA compliance.”

This declaration establishes the original accountability for the AI software (which is a requirement under ITA 2000 compliance) and incorporates the first requirement for identifying the software as AI.

This may be one of the mandatory contract clauses recommended for use in every software supply contract.
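If the clause were also captured in a machine-readable form alongside the signed contract, it could feed directly into the software inventory discussed above. The following is a purely hypothetical sketch; the field names and the classification rule are my assumptions, not part of DGPSI-AI or the proposed clause:

```python
from dataclasses import dataclass

# Hypothetical machine-readable record of the proposed "Disclosure and
# Assurance" clause; field names are illustrative assumptions.
@dataclass
class DisclosureAndAssurance:
    developer: str              # who developed the original code
    self_modifying: bool        # can it modify its code without human intervention?
    dpdpa_safe: bool            # tested and assured safe for personal data processing
    contract_reference: str     # where the signed clause appears

    def software_class(self) -> str:
        # Assumption: self-modifying behaviour triggers the "AI" classification.
        return "AI" if self.self_modifying else "Non-AI"

d = DisclosureAndAssurance(
    developer="ExampleSoft Pvt Ltd",
    self_modifying=True,
    dpdpa_safe=True,
    contract_reference="Software Supply Agreement, Clause 12.3",
)
print(d.software_class())  # -> AI
```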

I request the readers to add their comments on the feasibility and desirability of such a clause, and on whether it can be voluntarily adopted or requires a mandate from the Government. I look forward to your views.

Naavi


Observations on the FREE-AI Committee Report

Continued from earlier posts:

The FREE-AI Committee chaired by Dr Pushpak Bhattacharyya has submitted a report to RBI consisting of 26 recommendations.

For these 26 recommendations, action and timeline responsibilities have also been assigned. Twelve of the actions (1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 13 and 23) are indicated as responsibilities of Regulators and the Government. Industry and SROs are indicated as responsible for some of the actions (4, 12, 13* and 14).

Thirteen action points (10, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 and 25) are attributed to REs, and they are listed below. These REs are the Data Fiduciaries to whom DGPSI-AI is applicable.

These requirements can be summarised as follows.

 

10. Capacity Building within REs: REs should develop AI-related capacity and governance competencies for the Board and C-suite, as well as structured and continuous training, upskilling and reskilling programs across the broader workforce who use AI, to effectively mitigate AI risks and guide ethical as well as responsible AI adoption.

14. Board-Approved AI Policy: To ensure the safe and responsible adoption of AI within institutions, REs should establish a board-approved AI policy which covers key areas such as governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection measures, AI disclosures, model life cycle framework, and liability framework. Industry bodies should support smaller entities with an indicative policy template.

15. Data Lifecycle Governance: REs must establish robust data governance frameworks, including internal controls and policies for data collection, access, usage, retention, and deletion for AI systems. These frameworks should ensure compliance with applicable legislation, such as the DPDP Act, throughout the data life cycle.

16. AI System Governance Framework: REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning. Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage. REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real-world consequences, this should include human oversight, especially for medium and high-risk use cases and applications.

17. Product Approval Process: REs should ensure that all AI-enabled products and solutions are brought within the scope of the institutional product approval framework, and that AI-specific risk evaluations are included in the product approval frameworks.

18. Consumer Protection: REs should establish a board-approved consumer protection framework that prioritises transparency, fairness, and accessible recourse mechanisms for customers. REs must invest in ongoing education campaigns to raise consumer awareness regarding safe AI usage and their rights.

19. Cybersecurity Measures: REs must identify potential security risks on account of their use of AI and strengthen their cybersecurity ecosystems (hardware, software, processes) to address them. REs may also make use of AI tools to strengthen cybersecurity, including dynamic threat detection and response mechanisms.

20. Red Teaming: REs should establish structured red teaming processes that span the entire AI lifecycle. The frequency and intensity of red teaming should be proportionate to the assessed risk level and potential impact of the AI application, with higher-risk models being subject to more frequent and comprehensive red teaming. Trigger-based red teaming should also be considered to address evolving threats and changes.

21. Business Continuity Plan for AI Systems: REs must augment their existing BCP frameworks to include both traditional system failures as well as AI model-specific performance degradation. REs should establish fallback mechanisms and periodically test the fallback workflows and AI model resilience through BCP drills.

22. AI Incident Reporting and Sectoral Risk Intelligence Framework: Financial sector regulators should establish a dedicated AI incident reporting framework for REs and FinTechs and encourage timely detection and reporting of AI-related incidents. The framework should adopt a tolerant, good-faith approach to encourage timely disclosure.

23. AI Inventory within REs and Sector-Wide Repository: REs should maintain a comprehensive internal AI inventory that includes all models, use cases, target groups, dependencies, risks and grievances, updated at least half-yearly, and it must be made available for supervisory inspections and audits. In parallel, regulators should establish a sector-wide AI repository that tracks AI adoption trends, concentration risks, and systemic vulnerabilities across the financial system, with due anonymisation of entity details.

24. AI Audit Framework: REs should implement a comprehensive, risk-based, calibrated AI audit framework, aligned with a board-approved AI risk categorisation, to ensure responsible adoption across the AI lifecycle, covering data inputs, model and algorithm, and the decision outputs.

a. Internal Audits: As the first level, REs should conduct internal audits proportionate to the risk level of AI applications.

b. Third-Party Audits: For high-risk or complex AI use cases, independent third-party audits should be undertaken.

c. Periodic Review: The overall audit framework should be reviewed and updated at least biennially to incorporate emerging risks, technologies, and regulatory developments. Supervisors should also develop AI-specific audit frameworks, with clear guidance on what to audit, how to assess it, and how to demonstrate compliance.

25. Disclosures by REs: REs should include AI-related disclosures in their annual reports and websites. Regulators should specify an AI-specific disclosure framework to ensure consistency and adequacy of information across institutions.
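Recommendation 23, in particular, implies a concrete data structure. Purely as an illustration (the report does not prescribe any format, and all field names below are my assumptions), an RE's internal AI inventory entry might look like this:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of an internal AI inventory entry per Recommendation 23;
# the FREE-AI report does not prescribe a format, so these fields are assumptions.
@dataclass
class AIInventoryEntry:
    model_name: str
    use_case: str
    target_groups: list[str]
    dependencies: list[str]        # data feeds, vendors, upstream systems
    risks: list[str]
    last_updated: date
    grievances: list[str] = field(default_factory=list)

    def update_overdue(self, today: date) -> bool:
        # The report asks for updates at least half-yearly (~183 days).
        return (today - self.last_updated).days > 183

entry = AIInventoryEntry(
    model_name="credit-scoring-v2",
    use_case="Retail loan underwriting",
    target_groups=["retail borrowers"],
    dependencies=["credit bureau feed", "core banking system"],
    risks=["model drift", "bias against thin-file customers"],
    last_updated=date(2025, 8, 15),
)
print(entry.update_overdue(date(2026, 3, 1)))  # -> True (review is due)
```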

Readers may kindly map DGPSI-AI against this list. At first glance, DGPSI-AI seems to cover all these aspects.

Continued….

Naavi


The FREE-AI report of RBI on AI adoption by Financial Sector

Continued from previous post

The 103-page report of the Committee chaired by Dr Pushpak Bhattacharyya lays out the following 7 Sutras:

1. Trust is the Foundation: Trust is non-negotiable and should remain uncompromised.
2. People First: AI should augment human decision-making but defer to human judgment and citizen interest.
3. Innovation over Restraint: Foster responsible innovation with purpose.
4. Fairness and Equity: AI outcomes should be fair and non-discriminatory.
5. Accountability: Accountability rests with the entities deploying AI.
6. Understandable by Design: Ensure explainability for trust.
7. Safety, Resilience, and Sustainability: AI systems should be secure, resilient and energy efficient.

The Committee has made the following 26 recommendations under six pillars, with the responsibility and timeline for each noted in parentheses.

Innovation Enablement Framework

Infrastructure Pillar

1. Financial Sector Data Infrastructure: A high-quality financial sector data infrastructure should be established, as a digital public infrastructure, to help build trustworthy AI models for the financial sector. It may be integrated with the AI Kosh – India Datasets Platform, established under the IndiaAI Mission. (Regulators and Government, Short term)

2. AI Innovation Sandbox: An AI innovation sandbox for the financial sector should be established to enable REs, FinTechs, and other innovators to develop AI-driven solutions, algorithms, and models in a secure and controlled environment. Other FSRs should also collaborate to contribute to and benefit from this initiative. (Regulators: RBI, MeitY, FSRs; Short term)

3. Incentives and Funding Support: Appropriate incentive structures and infrastructure must be put in place to encourage inclusive and equitable AI usage among smaller entities. To support innovation and to meet strategic sectoral needs, RBI may also consider allocating a fund for setting up data and compute infrastructure. (RBI and Government, Medium term)

4. Indigenous Financial Sector Specific AI Models: Indigenous AI models (including LLMs, SLMs, or non-LLM models) tailored specifically for the financial sector should be developed and offered as a public good. (Regulators, SROs and Industry, Medium term)

5. Integrating AI with DPI: An enabling framework should be established to integrate AI with DPI in order to accelerate the delivery of inclusive, affordable financial services at scale. (Regulators, Medium term)

Policy Pillar

6. Adaptive and Enabling Policies: Regulators should periodically undertake an assessment of existing policies and legal frameworks to ensure they effectively enable AI-driven innovations and address AI-specific risks. Regulators should develop a comprehensive AI policy framework for the financial sector, anchored in the Committee’s 7 Sutras, to provide flexible, forward-looking guidance for AI innovation, adoption, and risk mitigation across the sector. The RBI may consider issuing consolidated AI Guidance to serve as a single point of reference for regulated entities and the broader FinTech ecosystem on the responsible design, development, and deployment of AI solutions. (RBI, Medium term)

7. Enabling AI-Based Affirmative Action: Regulators should encourage AI-driven innovation that accelerates financial inclusion of underserved and unserved sections of society, and other such affirmative actions, by lowering compliance expectations as far as possible, without compromising basic safeguards. (Regulators, Medium term)

8. AI Liability Framework: Since AI systems are probabilistic and non-deterministic, regulators should adopt a graded liability framework that encourages responsible innovation. While REs must continue to remain liable for any loss suffered by customers, an accommodative supervisory approach is recommended where the RE has followed appropriate safety mechanisms such as incident reporting, audits, red teaming etc. This tolerant supervisory stance should be limited to first-time / one-off aberrations and denied in the event of repeated breaches, gross negligence, or failure to remediate identified issues. (Regulators, Medium term)

9. AI Institutional Framework: A permanent multi-stakeholder AI Standing Committee should be constituted under the Reserve Bank of India to continuously advise it on emerging opportunities and risks, monitor the evolution of AI technology, and assess the ongoing relevance of current regulatory frameworks. The Committee may be constituted for an initial period of five years, with a built-in review mechanism and a sunset clause. A dedicated institution should be established for the financial sector, operating under a hub-and-spoke model to the national-level AI Safety Institute, for continuous monitoring and sectoral coordination. (Regulators and RBI, Short term)

Capacity Pillar

10. Capacity Building within REs: REs should develop AI-related capacity and governance competencies for the Board and C-suite, as well as structured and continuous training, upskilling and reskilling programs across the broader workforce who use AI, to effectively mitigate AI risks and guide ethical as well as responsible AI adoption. (REs, Medium term)

11. Capacity Building for Regulators and Supervisors: Regulators and supervisors should invest in training and institutional capacity building initiatives to ensure that they possess an adequate understanding of AI technologies, and that the regulatory and supervisory frameworks match the evolving landscape of AI, including associated risks and ethical considerations. RBI may consider establishing a dedicated AI institute to support sector-wide capacity development. (RBI, Medium term)

12. Framework for Sharing Best Practices: The financial services industry, through bodies such as IBA or SROs, should establish a framework for the exchange of AI-related use cases, lessons learned, and best practices, and promote responsible scaling by highlighting positive outcomes, challenges, and sound governance frameworks. (Industry Association / SRO, Medium term)

13. Recognise and Reward Responsible AI Innovation: Regulators and industry bodies should introduce structured programs to recognise and reward responsible AI innovation in the financial sector, particularly those that demonstrate positive social impact and embed ethical considerations by design. (Regulators and Industry, Medium term)

Risk Mitigation Framework

Governance Pillar

14. Board-Approved AI Policy: To ensure the safe and responsible adoption of AI within institutions, REs should establish a board-approved AI policy which covers key areas such as governance structure, accountability, risk appetite, operational safeguards, auditability, consumer protection measures, AI disclosures, model life cycle framework, and liability framework. Industry bodies should support smaller entities with an indicative policy template. (REs and Industry, Medium term)

15. Data Lifecycle Governance: REs must establish robust data governance frameworks, including internal controls and policies for data collection, access, usage, retention, and deletion for AI systems. These frameworks should ensure compliance with applicable legislation, such as the DPDP Act, throughout the data life cycle. (REs, Medium term)

16. AI System Governance Framework: REs must implement robust model governance mechanisms covering the entire AI model lifecycle, including model design, development, deployment, and decommissioning. Model documentation, validation, and ongoing monitoring, including mechanisms to detect and address model drift and degradation, should be carried out to ensure safe usage. REs should also put in place strong governance before deploying autonomous AI systems that are capable of acting independently in financial decision-making. Given the higher potential for real-world consequences, this should include human oversight, especially for medium and high-risk use cases and applications. (REs, Medium term)

17. Product Approval Process: REs should ensure that all AI-enabled products and solutions are brought within the scope of the institutional product approval framework, and that AI-specific risk evaluations are included in the product approval frameworks. (REs, Medium term)

Protection Pillar

18. Consumer Protection: REs should establish a board-approved consumer protection framework that prioritises transparency, fairness, and accessible recourse mechanisms for customers. REs must invest in ongoing education campaigns to raise consumer awareness regarding safe AI usage and their rights. (REs, Medium term)

19. Cybersecurity Measures: REs must identify potential security risks on account of their use of AI and strengthen their cybersecurity ecosystems (hardware, software, processes) to address them. REs may also make use of AI tools to strengthen cybersecurity, including dynamic threat detection and response mechanisms. (REs, Medium term)

20. Red Teaming: REs should establish structured red teaming processes that span the entire AI lifecycle. The frequency and intensity of red teaming should be proportionate to the assessed risk level and potential impact of the AI application, with higher-risk models being subject to more frequent and comprehensive red teaming. Trigger-based red teaming should also be considered to address evolving threats and changes. (REs, Medium term)

21. Business Continuity Plan for AI Systems: REs must augment their existing BCP frameworks to include both traditional system failures as well as AI model-specific performance degradation. REs should establish fallback mechanisms and periodically test the fallback workflows and AI model resilience through BCP drills. (REs, Medium term)

22. AI Incident Reporting and Sectoral Risk Intelligence Framework: Financial sector regulators should establish a dedicated AI incident reporting framework for REs and FinTechs and encourage timely detection and reporting of AI-related incidents. The framework should adopt a tolerant, good-faith approach to encourage timely disclosure. (REs and Regulators, Medium term)

Assurance Pillar

23. AI Inventory within REs and Sector-Wide Repository: REs should maintain a comprehensive internal AI inventory that includes all models, use cases, target groups, dependencies, risks and grievances, updated at least half-yearly, and it must be made available for supervisory inspections and audits. In parallel, regulators should establish a sector-wide AI repository that tracks AI adoption trends, concentration risks, and systemic vulnerabilities across the financial system, with due anonymisation of entity details. (Regulators and REs, Short term)

24. AI Audit Framework: REs should implement a comprehensive, risk-based, calibrated AI audit framework, aligned with a board-approved AI risk categorisation, to ensure responsible adoption across the AI lifecycle, covering data inputs, model and algorithm, and the decision outputs. (Supervisors and REs, Medium term)

a. Internal Audits: As the first level, REs should conduct internal audits proportionate to the risk level of AI applications.

b. Third-Party Audits: For high-risk or complex AI use cases, independent third-party audits should be undertaken.

c. Periodic Review: The overall audit framework should be reviewed and updated at least biennially to incorporate emerging risks, technologies, and regulatory developments. Supervisors should also develop AI-specific audit frameworks, with clear guidance on what to audit, how to assess it, and how to demonstrate compliance.

25. Disclosures by REs: REs should include AI-related disclosures in their annual reports and websites. Regulators should specify an AI-specific disclosure framework to ensure consistency and adequacy of information across institutions. (REs and Regulators, Short term)

26. AI Toolkit: An AI Compliance Toolkit will help REs validate, benchmark, and demonstrate compliance against key responsible AI principles such as fairness, transparency, accountability, and robustness. The toolkit should be developed and maintained by a recognised SRO or industry body. (Regulators and Industry, Medium term)

We shall analyse the report as we go forward.

Continued…

Naavi


RBI releases a framework for AI in Financial sector

On August 13, RBI released the report of the Committee to Develop a Framework for Responsible and Ethical Enablement of AI (FREE-AI) in the financial sector.

Copy of the report is available here:

The Committee has developed 7 Sutras to serve as foundational principles and 26 actionable recommendations.

Coincidentally, the report's release comes alongside the release of the DGPSI-AI framework, which was developed independently with six foundational principles and nine implementation specifications.

We welcome the release of the report and await its adoption.

Continued…

Naavi
