The Karnataka High Court on 10th September 2025 directed the State government to establish a fully functional and empowered Cyber Command Centre (CCC) to address the menace of digital crimes. It is stated that cyber crime complaints now constitute about 20% of the complaints received by the Police in Karnataka.
This has been a long-felt need, often recommended by the undersigned in the early years of the Cyber Crime Police Station, and now the Court, under Justice Nagaprasanna, has issued an order to set up the centre, adding that the officials posted to this centre should not be frequently transferred. The Court has also said that political interference should be avoided.
The Court directed that all Cyber Crime investigations should be consolidated under the CCC to ensure uniformity, expertise and accountability. Control of the CCC will rest with a separate DIG and move away from the CID. For the time being it will function from the CID premises, but all 43 Cyber Crime Police Stations in the State will come under it.
Following the order, the government has re-designated Pronab Mohanty, a senior IPS officer of the 1994 batch who was DGP, Internal Security Division (ISD), Police Computer Wing (PCW) and Cyber Crime & Narcotics (C&N), as DG, Cyber Command. This will be the first such unit in India.
The CCC will have four wings, namely:
These developments followed a petition filed in December by a company, Newsspace Research and Technologies Pvt. Ltd., against some of its former employees for allegedly stealing confidential data. In the past there have been several such complaints, but the Judge in this case, honourable Justice Nagaprasanna, set up an SIT for investigation, which has now evolved into the CCC.
The credit goes to the Judge and not to the political leadership of the State.
We expect such CCCs to emerge in all other states and, eventually, a National CCC should evolve, taking Cyber Crime investigation and prosecution to a different level.
We hope Mr Amit Shah takes note of these developments and encourages other states to follow the example.
Naavi
Every day we see some new development in the AI world. We are all enthusiastic about the potential of AI to increase the productivity of our organizations. Many SMEs/MSMEs, and perhaps even bigger organizations, are restructuring their manpower to use AI to reduce costs. Some believe that an agentic AI workforce can replace whole teams of employees for a given task.
The capability of AI is certainly visible in accomplishing some of our routine tasks in a fraction of a second.
However, another risk we are seeing is the tendency of some employees to jump the gun and start using AI tools to improve their personal productivity, creating their own personal AI agents. Some employers may be encouraging this, and some may not even be aware of it.
This BYOAI, or "Bring Your Own AI", tendency, sometimes referred to as Shadow AI, is a new threat vector for organizations.
While we at FDPPI are launching DGPSI-AI as an extended framework of DGPSI to assist organizations in mitigating AI risk, it is necessary to first appreciate the extent of the AI risks that are silently overtaking us.
In a recent compilation by an AI enthusiast, Mr Damien R Charlton, more than 358 legal cases involving AI hallucinations were tracked. These included 227 cases in the USA and 28 each in Israel and Australia. At a time when many were arguing that Courts can be replaced with AI and that an AI tool is more honest than a dishonest Judiciary, the recently observed hallucinations and rogue behaviour of AI have driven home a sense of caution.
A detailed analysis of these 358 cases needs to be attempted separately. Monetary sanctions have been indicated in many cases, though the amounts are only in the thousands and have not reached the millions and billions of dollars seen under the GDPR and competition laws around the world. There have been public reprimands and warnings in most cases.
The highest penalty appears to have been levied in "Crypto Open Patent Alliance v Wright", amounting to GBP 100,000, the Court stating that "documents, which .. bore the stamp of having been written using an AI engine, contained a series of falsehoods".
There were several other penalties, such as GBP 24,727 imposed in Bandla v Solicitors Regulation Authority (UK High Court, 13th May 2025), USD 31,100 in Lacey v State Farm General Insurance (California District Court, 6th May 2025) and 7,925 in In re Boy (Appellate Court of Illinois, July 21, 2025), the latter two for filing fabricated case laws.
These cases indicate that AI does lie, fabricate outputs and develop content that cannot be relied upon for responsible usage. Hence, placing reliance on AI is extremely risky, and replacing humans with AI is an unwise move.
It is for this reason that DGPSI-AI treats AI risk as an "Unknown" risk that should be classified as a Significant risk. All users of AI for personal data processing should be considered "Significant Data Fiduciaries"; they need to designate a DPO, conduct a DPIA and organize an annual data audit.
Considering these developments and the unstoppable growth of AI, data auditors in India need to equip themselves not only with knowledge of the DPDPA but also of AI, at least to the extent of being able to detect the use of AI and collect evidence of human oversight, possible hallucination, etc. The data auditors also need to verify whether any of the employees, or they themselves, use AI. In the ethical declarations signed by employees, a disclosure of such usage should also be considered mandatory.
Naavi
FDPPI is launching IDPS 2025, its flagship event, on September 17, 2025.
As a prelude to the conference, and to ensure that all participants get refreshed on the underlying legal and technical background information, FDPPI is organizing a complimentary virtual program, "Master Class on DPDPA and Introduction of AI to Business Managers", tomorrow.
The program for September 12 is as follows.
Joining link has already been forwarded to all those who have registered for IDPS 2025-September 17 through the registration process.
If any others would like to register for IDPS 2025 now, they can do so at the above link. If anybody wants to join the September 12th event as a special guest of FDPPI, they may contact us with their request.
We look forward to interacting with you both on September 12th and September 17th. The event on September 17th will be a physical event held in the MSR Institute's auditorium, co-hosted by MSR School of Law and supported by FICCI.
Naavi
Our partners for IDPS 2025
(P.S: This is a guest post from Mr. M.G.Kodandaram, IRS, Advocate)
Artificial Intelligence (AI) is rapidly transforming the global financial sector, from automating credit assessments and fraud detection to enabling hyper-personalized financial services. For India, with its diverse population, rapidly growing fintech ecosystem, and robust digital public infrastructure, the potential of AI is particularly significant. But, with innovation comes a host of challenges – bias in algorithms, risks of systemic instability, questions of liability, and concerns over consumer protection.
Recognizing both the potential and the perils, the Reserve Bank of India (RBI) has taken a decisive step. As a starting point, the RBI conducted extensive surveys across regulated entities (REs) such as banks, NBFCs, and fintechs. The survey revealed that 20.8% of respondents are already deploying AI systems, primarily in customer support, sales, credit underwriting, and cybersecurity. At the same time, a striking 67% of entities expressed interest in exploring AI use cases.
This dual reality highlights India’s financial sector at an inflection point: a substantial number of institutions are experimenting with AI, while the majority remain in exploratory phases. The RBI thus saw an opportunity to frame a forward-looking, risk-sensitive regulatory framework that both encourages innovation and safeguards systemic integrity.
On 13 August 2025, it released the Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI Report), a comprehensive policy blueprint developed by the FREE-AI Committee constituted in December 2024. The Committee, chaired by Professor Pushpak Bhattacharya of IIT Bombay, was tasked with studying AI adoption in India’s financial services sector, identifying key risks, and recommending a structured regulatory response.
This article provides a detailed analysis of the RBI’s FREE-AI framework, its guiding principles, strategic recommendations, and implications for financial institutions, consumers, MSMEs, RegTechs, and the broader Indian economy. It also situates the framework within global regulatory developments, assessing how India’s approach aligns with international best practices.
The FREE-AI Report foresees AI as a transformative force in India’s financial sector, opening new frontiers of innovation, inclusion, and efficiency. It identifies a diverse spectrum of opportunities, beginning with financial inclusion: AI-driven multilingual and multimodal digital platforms, combined with advanced credit assessment tools, could extend banking and lending services to millions of underserved citizens who remain outside the formal financial system. A major thrust lies in integrating AI with India’s robust digital public infrastructure – UPI, Aadhaar, ONDC, and OCEN – where machine learning could personalize service delivery, streamline credit flows, and expand access across urban and rural markets alike.
Another vision is the call for indigenous AI development: creating machine learning models trained on Indian languages, cultural contexts, and socio-economic realities, ensuring solutions that are not only technologically advanced but also socially relevant. The report also points to AI agents that can operate as financial assistants, in areas like ‘comparing loan offers in real time, managing consumer accounts, automating compliance, and executing secure transactions’, thus empowering individuals with greater financial literacy and autonomy. Looking further ahead, synergies between AI and emerging technologies such as quantum computing hold the promise of revolutionizing financial modelling, portfolio optimization, and systemic risk analysis, placing India at the forefront of global financial innovation. Together, these possibilities suggest a leap towards a more inclusive, competitive, and technologically resilient financial ecosystem.
However, the FREE-AI Report tempers this optimism with a careful mapping of risks that could derail progress if left unchecked. Algorithmic bias, arising from skewed datasets, could hardwire inequalities into credit allocation and access to services, undermining the very goal of inclusion.
The report warns of the amplification of inaccuracies: in high-frequency trading or large-scale financial transactions, even small errors embedded in AI models could multiply rapidly, producing systemic shocks. A further concern lies in homogenization, where overreliance on similar AI models could reduce diversity in financial strategies, making markets less resilient to shocks. Equally troubling is the spectre of AI-driven market manipulation – systems reinforcing trends in ways that may fuel volatility or enable subtle forms of exploitation. Accountability and liability are also fraught issues: in a financial ecosystem increasingly mediated by algorithms, tracing responsibility among AI developers, service providers, and financial institutions is a legal and ethical minefield. The risks of non-adoption are not insignificant either—institutions reluctant or unable to integrate AI may lose competitiveness, worsening the digital divide between large and small players.
Perhaps the most novel warning is the risk of unintended collusion among AI systems: independent algorithms, each optimizing for profit, might tacitly align behaviours that sustain supra-competitive prices or distort markets, creating outcomes akin to cartelization without human intervention. Added to this are escalating cybersecurity vulnerabilities: AI models themselves can be hacked, poisoned, or manipulated through adversarial inputs, exposing both institutions and consumers to fraud and theft. Outsourcing to third-party AI providers compounds these vulnerabilities, introducing risks related to regulatory compliance, data protection, and operational dependency. Taken together, these risks underscore the need for vigilance, resilience, and above all, a robust multi-dimensional regulatory framework: precisely what the RBI has sought to anticipate through the FREE-AI initiative.
The Report acknowledges that India’s existing legal framework, including the Information Technology Act, 2000 and RBI’s various sectoral guidelines, provides a foundation for AI governance.
The FREE-AI Report situates its recommendations within India’s evolving legal and regulatory architecture, recognizing that while the Information Technology Act, 2000 and a range of RBI’s sectoral guidelines have laid the foundation for digital governance, the pace of AI innovation demands sharper, AI-specific interventions. The report stresses that existing frameworks, though robust in parts, were not designed with the complexity of machine learning, algorithmic opacity, and automated decision-making in mind. As a result, they require deliberate recalibration to ensure that financial stability, consumer protection, and market integrity are preserved in an AI-driven economy.
The Cybersecurity Framework, 2016 is another key area requiring augmentation. While it presently mandates resilience against cyber threats, the emergence of AI-specific vulnerabilities, such as adversarial attacks, model poisoning, and data manipulation, introduces novel risks that traditional frameworks cannot adequately address. The FREE-AI Report urges regulators to expand the cybersecurity architecture to explicitly cover these threats, requiring continuous stress-testing of AI models, the creation of red-teaming protocols, and enhanced monitoring of adversarial behaviour in real time. This is particularly important in high-volume financial transactions, where a single compromised model could propagate systemic disruptions.
Consumer protection is also a recurring theme. The Customer Service Circular, 2015, which governs standards for fairness, transparency, and grievance redress, needs to evolve to reflect the reality of AI-driven decisions. The report argues for the creation of explicit mechanisms that allow customers to contest or appeal automated outcomes, such as loan denials or credit scoring decisions. In practice, this would mean obligating institutions to provide “explainability reports” in accessible language, enabling individuals to understand the rationale behind AI-driven determinations. This measure not only enhances transparency but also anchors trust in financial AI systems.
Similarly, the Fraud Risk Management Directions, 2024 offer a timely opportunity to embed AI both as a tool and as a subject of oversight. The report supports the use of AI in fraud detection, noting its ability to identify patterns across vast datasets far more effectively than human auditors. However, it cautions that such systems must undergo rigorous testing for bias, accuracy, and false positives, lest they unfairly target certain demographic groups or miss emerging threats. Regulatory amendments should therefore require financial institutions to adopt a dual approach—leveraging AI’s predictive power while subjecting its outputs to independent validation and periodic audit.
Finally, the report turns to the Outsourcing of IT Services Directions, 2023, which are already designed with the digital ecosystem in mind but require fine-tuning for AI. These directions should obligate IT service providers to disclose when AI is integrated into their solutions, conduct AI-specific risk assessments, and report the results to financial institutions and regulators. This ensures that institutions are not blindsided by “black box” technologies buried within vendor services. By codifying such obligations, regulators reinforce the idea that outsourcing does not equate to the outsourcing of responsibility.
Across these targeted amendments, the FREE-AI Report reiterates a central principle: accountability cannot be diluted by automation. Whether AI is deployed internally or via third-party providers, financial institutions must remain ultimately responsible for the outcomes, decisions, and risks that flow from these systems. This philosophy anchors the proposed reforms, striking a balance between encouraging innovation and ensuring that AI in finance develops within a framework of trust, fairness, and resilience.
At the heart of the FREE-AI framework lies the philosophy of the Seven Sutras, a set of guiding principles that define the ethical compass for AI adoption in India’s financial sector. These sutras are not mere rhetorical commitments but carefully articulated values meant to ensure that the pursuit of technological efficiency does not come at the expense of public trust, fairness, or accountability.
The first and most foundational sutra emphasizes that PUBLIC TRUST is the foundation of any financial AI system – without it, no degree of innovation can succeed. Closely tied to this is the PRINCIPLE OF HUMAN AUTHORITY, which ensures that individuals retain the power to override automated decisions, safeguarding autonomy in an era of algorithmic governance. The framework further emphasizes that INNOVATION SHOULD BE ENCOURAGED RATHER THAN RESTRAINED, provided it delivers social benefit and is tempered by a careful assessment of risks. EQUITY remains central: AI must actively promote fairness and inclusion rather than replicate or intensify structural biases in lending, credit scoring, or financial access. ACCOUNTABILITY is non-transferable: financial institutions must remain responsible for all AI-driven outcomes, even when technologies are outsourced or automated. Complementing these are DESIGN-CENTRIC COMMITMENTS: systems must be inherently understandable and transparent, ensuring explainability for regulators, institutions, and consumers alike. Finally, the PRINCIPLE OF SAFETY AND RESILIENCE mandates that AI models be robust against both physical disruptions and cyber threats, built with sustainability and long-term security in mind. Collectively, these Seven Sutras provide the ethical scaffolding for 26 concrete recommendations, organized into six strategic pillars, translating high-level ideals into actionable pathways for responsible AI deployment in finance.
The FREE-AI Committee structures its 26 recommendations under six strategic pillars, carefully balancing the twin imperatives of fostering innovation and ensuring safeguards within India’s financial system. For a detailed analysis of the recommendations, please refer to "The FREE-AI Report of RBI on AI adoption by Financial Sector", posted on August 14, 2025 by NAAVI (Vijayashankar Na) at https://www.naavi.org/wp/the-free-ai-report-of-rbi-on-ai-adoption-by-financial-sector/
The first pillar, Infrastructure, underscores the importance of treating financial sector data as part of the nation’s digital public infrastructure. By integrating this with a repository for trustworthy, indigenous AI models, the framework seeks to build solutions that are rooted in Indian realities. To reduce privacy risks, the report also calls for the provision of anonymized datasets that can be used for training without compromising individual rights. This infrastructure-first approach ensures that innovation is not dependent on fragmented or opaque data sources but is anchored in transparency and public trust.
The second pillar, Policy, highlights the need for regulatory agility in keeping pace with technological change. Central to this is the idea of an AI Innovation Sandbox, a controlled environment where financial institutions and startups can experiment with AI applications under regulatory supervision. This approach enables learning-by-doing without jeopardizing market stability. Complementing this is the proposal for adaptive regulatory policies that can evolve with technological advances, rather than being frozen in time. To further accelerate India’s self-reliance in AI, the report recommends the creation of a dedicated AI development fund focused on India-specific solutions, ensuring that domestic challenges are addressed with homegrown innovations.
The third pillar, Capacity, addresses the human capital dimension of AI adoption. The report emphasizes that institutional readiness is as important as technological readiness. It calls for AI literacy programs targeted at board members and senior leadership in financial institutions, ensuring that strategic decisions are made with a clear understanding of both risks and opportunities. Regulators, too, must be trained to oversee AI adoption effectively, equipping them with the technical skills necessary to scrutinize algorithms, assess bias, and enforce compliance. This dual focus on institutions and regulators creates a shared foundation of competence.
The fourth pillar, Governance, translates principles into organizational responsibility. The report proposes that every financial institution be mandated to adopt a board-approved AI policy, formally embedding AI governance into corporate oversight structures. At the regulatory level, it calls for the RBI to issue a consolidated AI guidance document, which would establish uniform standards across the financial sector and prevent a patchwork of inconsistent practices. This pillar reinforces the idea that governance is not an afterthought but an intrinsic part of AI deployment.
The fifth pillar, Protection, is centered on consumer rights and systemic resilience. The recommendations require clear disclosure whenever customers interact with AI systems, ensuring transparency and informed consent. Cybersecurity protocols must be significantly strengthened to address emerging AI-specific threats, including adversarial attacks and data poisoning. Equally important is the development of AI-specific consumer grievance redressal mechanisms, giving individuals a way to contest automated outcomes and safeguarding trust in the financial system.
Finally, the sixth pillar, Assurance, provides mechanisms for accountability and long-term resilience. This includes the implementation of AI audit frameworks to independently verify the fairness, accuracy, and security of deployed models. Product approval processes, traditionally limited to financial instruments, should be expanded to cover AI models as well, ensuring that risks are assessed before large-scale deployment. Business continuity plans must also be adapted to account for AI model degradation, acknowledging that algorithms, like physical infrastructure, require maintenance and contingency planning. Together, these six pillars offer more than a policy roadmap: they provide a carefully balanced architecture that nurtures innovation while creating a robust safety net for systemic stability, consumer protection, and ethical integrity in the age of financial AI.
The FREE-AI Report does not confine its vision to system-wide reforms but also looks into sector-specific implications, particularly for micro, small, and medium enterprises (MSMEs) and regulatory technology providers (RegTechs).
For MSMEs, which form the backbone of India’s economy yet often struggle with limited access to affordable credit, AI emerges as a potential game-changer. AI-driven credit assessment tools can process alternative data sources, such as digital payment histories, e-commerce transactions, or supply chain records, to build more accurate and inclusive risk profiles. This can help overcome the limitations of traditional credit scoring, which often disadvantages smaller enterprises due to thin or incomplete financial histories. The integration of AI with platforms like the Open Network for Digital Commerce (ONDC) and the Open Credit Enablement Network (OCEN) further amplifies these possibilities. By providing fairer, more transparent, and data-driven assessments, AI can enable small businesses to gain visibility in digital marketplaces, secure timely financing, and participate more fully in India’s formal economy. In this sense, AI does not merely promise efficiency but opens a pathway to structural empowerment for enterprises that have historically been underserved.
On the regulatory side, the Report positions RegTechs as indispensable allies in building a resilient AI ecosystem for finance. Regulatory technology providers can leverage AI to design tools that automate compliance checks, detect anomalies, and enhance transparency in real time, reducing costs for financial institutions while increasing regulatory oversight. The Report specifically notes that aligning these efforts with the FACE Code of Conduct can provide a consistent ethical and operational framework for RegTech adoption. This alignment not only facilitates smoother integration of RegTech solutions with existing financial infrastructure but also strengthens consumer protection by embedding fairness and accountability into compliance processes.
The RBI’s FREE-AI Report is a strategic blueprint for balancing innovation with responsibility in India’s financial sector. By laying down seven ethical Sutras and six strategic pillars, the framework seeks to ensure that AI adoption in finance is fair, transparent, accountable, and resilient.
For financial institutions, this means rethinking AI governance structures, reviewing outsourcing agreements, building AI inventories, and embedding fairness audits into AI-driven decisions. For consumers, it promises greater transparency and protections when engaging with AI systems. For the broader economy, it paves the way for AI-driven financial inclusion and sustainable innovation.
Ultimately, if implemented in both letter and spirit, FREE-AI could position India as a global leader in responsible AI adoption, creating a financial ecosystem where cutting-edge innovation thrives without compromising public trust.
All of us are aware of GPT, the Generative Pre-trained Transformer, a system to which you give an input and which generates new text, pictures, audio or video that should normally be more meaningful than the "prompt text".
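The two phases the name captures, pre-training on a corpus and then generating a continuation of a prompt, can be illustrated with a deliberately tiny Python sketch. This is a hypothetical bigram (word-pair) model, nowhere near a real transformer; the function names and toy corpus are invented purely for illustration.

```python
import random

def pretrain(corpus):
    """'Pre-training': record, for each word, the words seen to follow it."""
    words = corpus.split()
    table = {}
    for current, following in zip(words, words[1:]):
        table.setdefault(current, []).append(following)
    return table

def generate(table, prompt, max_new_words=5, seed=0):
    """'Generation': extend the prompt one word at a time from the table."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(max_new_words):
        candidates = table.get(out[-1])
        if not candidates:  # no continuation was learned for this word
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Toy illustration: "pre-train" on a one-line corpus, then prompt it.
table = pretrain("data protection builds trust and trust builds business")
print(generate(table, "trust", max_new_words=3))
```

A real GPT replaces the word-pair table with a neural network trained on billions of tokens, but the prompt-in, continuation-out shape is the same.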
IDPS 2025 is an event whose objective is to transform the current knowledge of attendees to a distinctly elevated status, so that post IDPS 2025 they will be an enlightened lot.
A GPT’s output depends on its pre-training, and we hope to address this requirement through the experienced speakers who will share their thoughts.
We do anticipate that GPT-IDPS may hallucinate and exhibit creative but arguable thoughts. But since this is an academic seminar, we presume that there will be enough guardrails and kill switches to prevent any adverse impact on society.
The last session of the September 17 event happening in Bengaluru is a panel discussion which I will be moderating, with the session theme "Sectoral Implications of the DPDPA". It will have four speakers representing four different stakeholders: Mr Jason Joseph representing the FinTech industry, Mr Kaustub Ghosh representing the health sector, Mr Rushab Pinesh Mehta representing the DPO community, and Ms Krithi Shetty representing the PET development sector.
The attendees will have the freedom to raise their prompts with any of these "Models" and get their GPT outputs. As a moderator, I will be raising my own prompts and also acting as a guardrail and a kill switch if the discussions go off track.
Be prepared for a lengthy session which may extend beyond the scheduled closure time but we shall endeavour to give you value for the time you spend.
So… Be a Prompt engineer and send us your prompts in advance… or raise them by being present there without fail.
When I say "Be present without fail", I am reminded of the last scene of the famous film "Sant Tukaram", which has been made in many languages, where Tukaram is being taken to heaven and people come and tell his wife that a chariot has come from heaven to take Tukaram and she should come immediately. She, however, is so engrossed in her day-to-day work that she misses the opportunity to witness the event.
Some of you may think you have attended many conferences and this is just another one in the line. Think twice... Don’t be like Jijabai and lose the opportunity to witness your own enlightenment on how to meet the DPDPA challenge in the AI era.
Register today, if you have not done so, here:
Naavi