The dangers of BYOAi

Every day brings some new development in the AI world. We are all enthusiastic about the potential of AI to increase the productivity of our organizations. Many SMEs/MSMEs, and perhaps even the bigger organizations, are restructuring their manpower to use AI for reducing costs. Some believe that an agentic AI workforce can replace whole teams of employees for a given task.

The capability of AI in accomplishing some of our routine tasks in a fraction of a second is certainly visible.

However, another risk now coming into view is the tendency of some employees to jump the gun and start using AI tools to improve their personal productivity, even creating their own personal AI agents. Some employers may be encouraging this, and some may not even be aware of it.

This BYOAi, or Bring Your Own AI, tendency, sometimes referred to as Shadow AI, is a new threat vector for organizations.

While we at FDPPI are launching DGPSI-AI as an extended framework of DGPSI to assist organizations in mitigating AI risk, it is necessary first to appreciate the extent of the AI risks that are silently overtaking us.

A recent compilation by AI enthusiast Mr Damien R Charlton tracked more than 358 legal cases involving AI hallucinations, including 227 cases in the USA and 28 each in Israel and Australia. At a time when many were arguing that Courts can be replaced with AI, and that an AI tool is more honest than a dishonest Judiciary, the recently observed hallucinations and rogue behaviour of AI have driven home a sense of caution.

A detailed analysis of these 358 cases needs to be attempted separately. Monetary sanctions have been indicated in many cases, though the amounts are only in the thousands and have not reached the millions and billions of dollars seen under the GDPR and Competition Acts around the world. In most cases there have been public reprimands and warnings.

The highest penalty appears to have been levied in Crypto Open Patent Alliance v Wright, amounting to GBP 100,000, the Court stating that “documents, which .. bore the stamp of having been written using an AI engine, contained a series of falsehoods.”

There were several other penalties, such as GBP 24,727 imposed in Bandla v Solicitors Regulation Authority (UK High Court, 13th May 2025), USD 31,100 in Lacey v State Farm General Insurance (California District Court, 6th May 2025), and USD 7,925 in In re Boy (Appellate Court of Illinois, July 21, 2025), the latter two both for filing fabricated case laws.

These cases indicate that AI does lie, fabricates outputs, and develops content that cannot be relied upon in responsible usage. Hence, placing reliance on AI is extremely risky, and replacing humans with AI is an unwise move.

It is for this reason that DGPSI-AI treats AI risk as an “Unknown” risk that should be considered a Significant risk. All users of AI for personal data processing should be considered “Significant Data Fiduciaries”: they need to designate a DPO, conduct a DPIA and organize an annual data audit.

Considering these developments and the unstoppable growth of AI, data auditors in India need to equip themselves not only with knowledge of the DPDPA but also, at least to some extent, of AI, so that they can detect the use of AI and collect evidence of human oversight, possible hallucination, etc. Data auditors also need to verify whether any of the employees, or they themselves, use AI. In the ethical declarations signed by employees, disclosure of such usage should be considered mandatory.

Naavi

 

Posted in Privacy | Leave a comment

IDPS 2025 Curtain Raiser and Master Class on DPDPA and AI for Business Managers

FDPPI is launching IDPS 2025, its flagship event, on September 17, 2025.

As a prelude to the conference, and to ensure that all participants get refreshed on the underlying legal and technical background, FDPPI is organizing a complimentary virtual program, “Master Class on DPDPA and Introduction of AI to Business Managers”, tomorrow.

The program for September 12 is as follows.

Joining link has already been forwarded to all those who have registered for IDPS 2025-September 17 through the registration process.

If anybody else would like to register for IDPS 2025 now, they can do so at the above link. If anybody wants to join the September 12th event as a special guest of FDPPI, they may contact us with their request.

We look forward to interacting with you both on September 12th and September 17th. The September 17 event will be a physical event at the MSR Institute’s auditorium, co-hosted by MSR School of Law and supported by FICCI.

Naavi

Our partners for IDPS 2025


RBI Directions on Implementation of AI in Financial Services

(P.S.: This is a guest post from Mr. M.G. Kodandaram, IRS, Advocate)

Introduction

Artificial Intelligence (AI) is rapidly transforming the global financial sector, from automating credit assessments and fraud detection to enabling hyper-personalized financial services. For India, with its diverse population, rapidly growing fintech ecosystem, and robust digital public infrastructure, the potential of AI is particularly significant. But, with innovation comes a host of challenges – bias in algorithms, risks of systemic instability, questions of liability, and concerns over consumer protection.

Recognizing both the potential and the perils, the Reserve Bank of India (RBI) has taken a decisive step. As a starting point, the RBI conducted extensive surveys across regulated entities (REs) such as banks, NBFCs, and fintechs. The survey revealed that 20.8% of respondents are already deploying AI systems, primarily in customer support, sales, credit underwriting, and cybersecurity. At the same time, a striking 67% of entities expressed interest in exploring AI use cases.

This dual reality highlights India’s financial sector at an inflection point: a substantial number of institutions are experimenting with AI, while the majority remain in exploratory phases. The RBI thus saw an opportunity to frame a forward-looking, risk-sensitive regulatory framework that both encourages innovation and safeguards systemic integrity.

On 13 August 2025, it released the Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI Report), a comprehensive policy blueprint developed by the FREE-AI Committee constituted in December 2024. The Committee, chaired by Professor Pushpak Bhattacharya of IIT Bombay, was tasked with studying AI adoption in India’s financial services sector, identifying key risks, and recommending a structured regulatory response.

This article provides a detailed analysis of the RBI’s FREE-AI framework, its guiding principles, strategic recommendations, and implications for financial institutions, consumers, MSMEs, RegTechs, and the broader Indian economy. It also situates the framework within global regulatory developments, assessing how India’s approach aligns with international best practices.

Opportunities in AI for Finance

The FREE-AI Report foresees AI as a transformative force in India’s financial sector, opening new frontiers of innovation, inclusion, and efficiency. It identifies a diverse spectrum of opportunities, beginning with financial inclusion: AI-driven multilingual and multimodal digital platforms, combined with advanced credit assessment tools, could extend banking and lending services to millions of underserved citizens who remain outside the formal financial system. A major thrust lies in integrating AI with India’s robust digital public infrastructure – UPI, Aadhaar, ONDC, and OCEN – where machine learning could personalize service delivery, streamline credit flows, and expand access across urban and rural markets alike.

Another vision is the call for indigenous AI development: creating machine learning models trained on Indian languages, cultural contexts, and socio-economic realities, ensuring solutions that are not only technologically advanced but also socially relevant. The report also points to AI agents that can operate as financial assistants, in areas like ‘comparing loan offers in real time, managing consumer accounts, automating compliance, and executing secure transactions’, thus empowering individuals with greater financial literacy and autonomy. Looking further ahead, synergies between AI and emerging technologies such as quantum computing hold the promise of revolutionizing financial modelling, portfolio optimization, and systemic risk analysis, placing India at the forefront of global financial innovation. Together, these possibilities suggest a leap towards a more inclusive, competitive, and technologically resilient financial ecosystem.

However, the FREE-AI Report tempers this optimism with a careful mapping of risks that could derail progress if left unchecked. Algorithmic bias, arising from skewed datasets, could hardwire inequalities into credit allocation and access to services, undermining the very goal of inclusion.

The report warns of the amplification of inaccuracies: in high-frequency trading or large-scale financial transactions, even small errors embedded in AI models could multiply rapidly, producing systemic shocks. A further concern lies in homogenization, where overreliance on similar AI models could reduce diversity in financial strategies, making markets less resilient to shocks. Equally troubling is the spectre of AI-driven market manipulation – systems reinforcing trends in ways that may fuel volatility or enable subtle forms of exploitation. Accountability and liability are also fraught issues: in a financial ecosystem increasingly mediated by algorithms, tracing responsibility among AI developers, service providers, and financial institutions is a legal and ethical minefield. The risks of non-adoption are not insignificant either—institutions reluctant or unable to integrate AI may lose competitiveness, worsening the digital divide between large and small players.

Perhaps the most novel warning is the risk of unintended collusion among AI systems: independent algorithms, each optimizing for profit, might tacitly align behaviours that sustain supra-competitive prices or distort markets, creating outcomes akin to cartelization without human intervention. Added to this are escalating cybersecurity vulnerabilities: AI models themselves can be hacked, poisoned, or manipulated through adversarial inputs, exposing both institutions and consumers to fraud and theft. Outsourcing to third-party AI providers compounds these vulnerabilities, introducing risks related to regulatory compliance, data protection, and operational dependency. Taken together, these risks underscore the need for vigilance, resilience, and above all, a robust multi-dimensional regulatory framework, precisely what the RBI has sought to anticipate through the FREE-AI initiative.

Proposed Amendments to Existing RBI Regulations

The FREE-AI Report situates its recommendations within India’s evolving legal and regulatory architecture, recognizing that while the Information Technology Act, 2000 and a range of RBI’s sectoral guidelines have laid the foundation for digital governance, the pace of AI innovation demands sharper, AI-specific interventions. The report stresses that existing frameworks, though robust in parts, were not designed with the complexity of machine learning, algorithmic opacity, and automated decision-making in mind. As a result, they require deliberate recalibration to ensure that financial stability, consumer protection, and market integrity are preserved in an AI-driven economy.

The Cybersecurity Framework, 2016 is another key area requiring augmentation. While it presently mandates resilience against cyber threats, the emergence of AI-specific vulnerabilities, such as adversarial attacks, model poisoning, and data manipulation, introduces novel risks that traditional frameworks cannot adequately address. The FREE-AI Report urges regulators to expand the cybersecurity architecture to explicitly cover these threats, requiring continuous stress-testing of AI models, the creation of red-teaming protocols, and enhanced monitoring of adversarial behaviour in real time. This is particularly important in high-volume financial transactions, where a single compromised model could propagate systemic disruptions.

Consumer protection

Consumer protection is also a recurring theme. The Customer Service Circular, 2015, which governs standards for fairness, transparency, and grievance redress, needs to evolve to reflect the reality of AI-driven decisions. The report argues for the creation of explicit mechanisms that allow customers to contest or appeal automated outcomes, such as loan denials or credit scoring decisions. In practice, this would mean obligating institutions to provide “explainability reports” in accessible language, enabling individuals to understand the rationale behind AI-driven determinations. This measure not only enhances transparency but also anchors trust in financial AI systems.

Similarly, the Fraud Risk Management Directions, 2024 offer a timely opportunity to embed AI both as a tool and as a subject of oversight. The report supports the use of AI in fraud detection, noting its ability to identify patterns across vast datasets far more effectively than human auditors. However, it cautions that such systems must undergo rigorous testing for bias, accuracy, and false positives, lest they unfairly target certain demographic groups or miss emerging threats. Regulatory amendments should therefore require financial institutions to adopt a dual approach—leveraging AI’s predictive power while subjecting its outputs to independent validation and periodic audit.

Finally, the report turns to the Outsourcing of IT Services Directions, 2023, which are already designed with the digital ecosystem in mind but require fine-tuning for AI. These directions should obligate IT service providers to disclose when AI is integrated into their solutions, conduct AI-specific risk assessments, and report the results to financial institutions and regulators. This ensures that institutions are not blindsided by “black box” technologies buried within vendor services. By codifying such obligations, regulators reinforce the idea that outsourcing does not equate to the outsourcing of responsibility.

Across these targeted amendments, the FREE-AI Report reiterates a central principle: accountability cannot be diluted by automation. Whether AI is deployed internally or via third-party providers, financial institutions must remain ultimately responsible for the outcomes, decisions, and risks that flow from these systems. This philosophy anchors the proposed reforms, striking a balance between encouraging innovation and ensuring that AI in finance develops within a framework of trust, fairness, and resilience.

The Seven Sutras: Guiding Principles of FREE-AI

At the heart of the FREE-AI framework lies the philosophy of the Seven Sutras, a set of guiding principles that define the ethical compass for AI adoption in India’s financial sector. These sutras are not mere rhetorical commitments but carefully articulated values meant to ensure that the pursuit of technological efficiency does not come at the expense of public trust, fairness, or accountability.

The first and most foundational sutra emphasizes that PUBLIC TRUST is the foundation of any financial AI system: without it, no degree of innovation can succeed. Closely tied to this is the PRINCIPLE OF HUMAN AUTHORITY, which ensures that individuals retain the power to override automated decisions, safeguarding autonomy in an era of algorithmic governance. The framework further emphasizes that INNOVATION SHOULD BE ENCOURAGED RATHER THAN RESTRAINED, provided it delivers social benefit and is tempered by a careful assessment of risks. EQUITY remains central: AI must actively promote fairness and inclusion rather than replicate or intensify structural biases in lending, credit scoring, or financial access. ACCOUNTABILITY is non-transferable, i.e., financial institutions must remain responsible for all AI-driven outcomes, even when technologies are outsourced or automated. Complementing these are DESIGN-CENTRIC COMMITMENTS: systems must be inherently understandable and transparent, ensuring explainability for regulators, institutions, and consumers alike. Finally, the PRINCIPLE OF SAFETY AND RESILIENCE mandates that AI models be robust against both physical disruptions and cyber threats, built with sustainability and long-term security in mind. Collectively, these Seven Sutras provide the ethical scaffolding for 26 concrete recommendations, organized into six strategic pillars, translating high-level ideals into actionable pathways for responsible AI deployment in finance.

Six Strategic Pillars

The FREE-AI Committee structures its 26 recommendations under six strategic pillars, carefully balancing the twin imperatives of fostering innovation and ensuring safeguards within India’s financial system. For a detailed analysis of the recommendations, please refer to “The FREE-AI Report of RBI on AI Adoption by the Financial Sector”, posted on August 14, 2025 by Naavi (Vijayashankar Na) at https://www.naavi.org/wp/the-free-ai-report-of-rbi-on-ai-adoption-by-financial-sector/

The first pillar, Infrastructure, underscores the importance of treating financial sector data as part of the nation’s digital public infrastructure. By integrating this with a repository for trustworthy, indigenous AI models, the framework seeks to build solutions that are rooted in Indian realities. To reduce privacy risks, the report also calls for the provision of anonymized datasets that can be used for training without compromising individual rights. This infrastructure-first approach ensures that innovation is not dependent on fragmented or opaque data sources but is anchored in transparency and public trust.

The second pillar, Policy, highlights the need for regulatory agility in keeping pace with technological change. Central to this is the idea of an AI Innovation Sandbox, a controlled environment where financial institutions and startups can experiment with AI applications under regulatory supervision. This approach enables learning-by-doing without jeopardizing market stability. Complementing this is the proposal for adaptive regulatory policies that can evolve with technological advances, rather than being frozen in time. To further accelerate India’s self-reliance in AI, the report recommends the creation of a dedicated AI development fund focused on India-specific solutions, ensuring that domestic challenges are addressed with homegrown innovations.

The third pillar, Capacity, addresses the human capital dimension of AI adoption. The report emphasizes that institutional readiness is as important as technological readiness. It calls for AI literacy programs targeted at board members and senior leadership in financial institutions, ensuring that strategic decisions are made with a clear understanding of both risks and opportunities. Regulators, too, must be trained to oversee AI adoption effectively, equipping them with the technical skills necessary to scrutinize algorithms, assess bias, and enforce compliance. This dual focus on institutions and regulators creates a shared foundation of competence.

The fourth pillar, Governance, translates principles into organizational responsibility. The report proposes that every financial institution be mandated to adopt a board-approved AI policy, formally embedding AI governance into corporate oversight structures. At the regulatory level, it calls for the RBI to issue a consolidated AI guidance document, which would establish uniform standards across the financial sector and prevent a patchwork of inconsistent practices. This pillar reinforces the idea that governance is not an afterthought but an intrinsic part of AI deployment.

The fifth pillar, Protection, is centered on consumer rights and systemic resilience. The recommendations require clear disclosure whenever customers interact with AI systems, ensuring transparency and informed consent. Cybersecurity protocols must be significantly strengthened to address emerging AI-specific threats, including adversarial attacks and data poisoning. Equally important is the development of AI-specific consumer grievance redressal mechanisms, giving individuals a way to contest automated outcomes and safeguarding trust in the financial system.

Finally, the sixth pillar, Assurance, provides mechanisms for accountability and long-term resilience. This includes the implementation of AI audit frameworks to independently verify the fairness, accuracy, and security of deployed models. Product approval processes, traditionally limited to financial instruments, should be expanded to cover AI models as well, ensuring that risks are assessed before large-scale deployment. Business continuity plans must also be adapted to account for AI model degradation, acknowledging that algorithms, like physical infrastructure, require maintenance and contingency planning. Together, these six pillars offer more than a policy roadmap: they provide a carefully balanced architecture that nurtures innovation while creating a robust safety net for systemic stability, consumer protection, and ethical integrity in the age of financial AI.

Sectoral Implications

The FREE-AI Report does not confine its vision to system-wide reforms but also looks into sector-specific implications, particularly for micro, small, and medium enterprises (MSMEs) and regulatory technology providers (RegTechs).

For MSMEs

For MSMEs, which form the backbone of India’s economy yet often struggle with limited access to affordable credit, AI emerges as a potential game-changer. AI-driven credit assessment tools can process alternative data sources, such as digital payment histories, e-commerce transactions, or supply chain records, to build more accurate and inclusive risk profiles. This can help overcome the limitations of traditional credit scoring, which often disadvantages smaller enterprises due to thin or incomplete financial histories. The integration of AI with platforms like the Open Network for Digital Commerce (ONDC) and the Open Credit Enablement Network (OCEN) further amplifies these possibilities. By providing fairer, more transparent, and data-driven assessments, AI can enable small businesses to gain visibility in digital marketplaces, secure timely financing, and participate more fully in India’s formal economy. In this sense, AI does not merely promise efficiency but opens a pathway to structural empowerment for enterprises that have historically been underserved.

On the regulatory side, the Report positions RegTechs as indispensable allies in building a resilient AI ecosystem for finance. Regulatory technology providers can leverage AI to design tools that automate compliance checks, detect anomalies, and enhance transparency in real time, reducing costs for financial institutions while increasing regulatory oversight. The Report specifically notes that aligning these efforts with the FACE Code of Conduct can provide a consistent ethical and operational framework for RegTech adoption. This alignment not only facilitates smoother integration of RegTech solutions with existing financial infrastructure but also strengthens consumer protection by embedding fairness and accountability into compliance processes.

The Way Forward

The RBI’s FREE-AI Report is a strategic blueprint for balancing innovation with responsibility in India’s financial sector. By laying down seven ethical Sutras and six strategic pillars, the framework seeks to ensure that AI adoption in finance is fair, transparent, accountable, and resilient.

For financial institutions, this means rethinking AI governance structures, reviewing outsourcing agreements, building AI inventories, and embedding fairness audits into AI-driven decisions. For consumers, it promises greater transparency and protections when engaging with AI systems. For the broader economy, it paves the way for AI-driven financial inclusion and sustainable innovation.

Ultimately, if implemented in both letter and spirit, FREE-AI could position India as a global leader in responsible AI adoption, creating a financial ecosystem where cutting-edge innovation thrives without compromising public trust.

 


September 17, 2025, is the day of Transformation for Data Protection Professionals

All of us are aware of GPT, the Generative Pre-Trained Transformer, a system where you give an input and it generates new text, pictures, audio or video that should normally be more meaningful than the “prompt text”.

IDPS 2025 is an event where the objective is to transform the current knowledge of attendees to a distinctly heightened/elevated status so that post IDPS 2025, they will be an enlightened lot.

A GPT’s output depends on its pre-training, and we hope to address this requirement through the experienced speakers who will share their thoughts.

We do anticipate that the GPT-IDPS may hallucinate and exhibit creative but arguable thoughts. But since this is an academic seminar, we presume there will be enough guardrails and kill switches to prevent any adverse impact on society.

The last session of the September 17 event in Bengaluru is a panel discussion which I will be moderating, on the theme “Sectoral Implications of the DPDPA”. It will have four speakers representing four different stakeholders: Mr Jason Joseph representing the FinTech industry, Mr Kaustub Ghosh representing the health sector, Mr Rushab Pinesh Mehta representing the DPO community, and Ms Krithi Shetty representing the PET development sector.

The attendees will have the freedom to raise their prompts with any of these “models” and get their GPT outputs. As moderator, I will both raise my own prompts and act as a guardrail and a kill switch if the discussions go off track.

Be prepared for a lengthy session that may extend beyond the scheduled closing time, but we shall endeavour to give you value for the time you spend.

So… Be a Prompt engineer and send us your prompts in advance… or raise them by being present there without fail.

When I say “be present without fail”, I am reminded of the last scene of the famous film “Sant Tukaram”, which has been made in many languages, where Tukaram is being taken to heaven: people come and tell his wife that a chariot has come from heaven to take Tukaram and that she should come immediately. She, however, is so engrossed in her day-to-day work that she misses the opportunity to witness the event.

Some of you may think you have attended many conferences and that this is just another in the line. Think twice… don’t be like Jijabai and lose the opportunity to witness your own enlightenment on how to meet the DPDPA challenge in the AI era.

Register today if you have not already done so, here:

Naavi


Get Ready for the Knowledge Conference..IDPS 2025

For FDPPI, IDPS 2025 is an annual pilgrimage into the world of Data Protection. IDPS 2025 is a special event in which multiple cities are participating. On 17th September, IDPS 2025 will be formally launched in Bangalore with a day-long event at MS Ramaiah School of Law, Bengaluru.

The event will be inaugurated by Honourable Justice Sri Subhash B Adi, former Judge of the Karnataka High Court.

Dr Venugopal, former Vice-Chancellor of Bangalore University, will be present as a Guest of Honour.

Sri Kuldeep Kumar Raina, Vice-Chancellor of MS Ramaiah University, will preside.

Four key panel discussions will follow, led by industry specialists.

The objective of the event is to focus on the impact on industry of the twin challenges of DPDPA and AI.

I request professionals from different industries to share their anticipated challenges in implementing DPDPA in the AI-driven technology ecosystem. We will try to find solutions and collate the views for presentation to MeitY.

Share your thoughts through e-mail to naavi@fdppi.in or use the following Google form: https://forms.gle/5hueBWZtWiLK9WVVA

We would also love to have a short video or message from you on any of the related topics, to be presented during the event; it can be sent to Naavi or any of the organizers.

Naavi


Thanks to Trump, DPDPA implementation may be fast-tracked

A couple of days back came the news that Google has been fined $3.5 billion (about Rs 31,000 crore) under the EU Competition Act over its advertising policy. Earlier, Google has faced multiple EU fines under the GDPR.

The following table indicates fines totalling around 350 million euros under GDPR, equivalent to around Rs 3,600 crore.


Summary Table: GDPR Fines on Google

Year | Amount | Authority & Country | Violation
2019 | €50M | CNIL, France | Lack of transparency, invalid ad-consent mechanisms
2021 | €90M (Google LLC) | CNIL, France | Cookie consent withdrawal made harder than acceptance
2021 | €60M (Google Ireland) | CNIL, France | Same as above, via Ireland-based operations
2022 | €150M | CNIL, France | Cookie refusal not as easy as acceptance
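As a rough cross-check of the totals quoted above, the four fines in the table can be summed and converted to rupees. This is only an illustrative sketch: the exchange rate assumed here (about Rs 103 per euro) is my assumption, and the actual conversion depends on the date of each fine.

```python
# Sum the GDPR fines on Google listed in the table above (amounts in millions of euros)
# and convert the total to rupees, using an ASSUMED illustrative rate of Rs 103 per euro.
fines_millions_eur = {
    "CNIL 2019": 50,
    "CNIL 2021 (Google LLC)": 90,
    "CNIL 2021 (Google Ireland)": 60,
    "CNIL 2022": 150,
}

total_eur_millions = sum(fines_millions_eur.values())  # 350, matching the text

inr_per_eur = 103  # assumed exchange rate, for illustration only
# 1 crore = Rs 1,00,00,000 (10 million rupees)
total_inr_crores = total_eur_millions * 1_000_000 * inr_per_eur / 10_000_000

print(total_eur_millions, round(total_inr_crores))  # prints: 350 3605
```

At the assumed rate, €350 million works out to roughly Rs 3,605 crore, consistent with the "around Rs 3,600 crore" figure quoted above.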

Meta has so far faced fines of around Rs 20,400 crore in the GDPR domain, including the largest single fine of 1.2 billion euros imposed by the Irish authority. Amazon has also faced fines of up to Rs 8,200 crore under GDPR.

We are not clear how much of the fines has actually been paid by these entities after litigation. One indication is provided below.


Summary Table: Paid vs. Pending

Company | Prominent Fines Imposed | Paid (Confirmed) | Pending / Appealed
Google | Multiple significant GDPR fines | None confirmed | Likely all under appeal or dispute
Amazon | €746M (Luxembourg) | Not verified, but obligation upheld | Appeal lost in March 2025
Meta | ~€2.3B total | ~€687M paid in 2022; ~51% of total so far | €1.2B and others still under appeal

Irrespective of the actual fines paid, it is clear that GDPR has provided enormous revenue potential to EU countries.

Indian DPDPA fines could be a lot less than GDPR fines, but if the DPB interprets the Rs 250 crore fine as applying “per instance of breach” or “per type of breach”, the actual penalty could be far higher than Rs 250 crore per breach.

Mr Donald Trump has been gloating over the revenue he is likely to generate from the 50% tariff on India. He does not understand that if he is to collect “trillions” of dollars in tariffs and make America rich, he has to collect it from US citizens alone. If US citizens do not buy Indian products, he is unlikely to collect the tariff income. The reality could be that some products may still be bought by US consumers at the current price plus 50%, while others may be discarded.

While Indian suppliers may lose some of their export sales, they may be able to substitute the lost sales with more sales in India or with more export incentives. India can neutralize the effect of the US tariff, if it wants, by introducing export incentives, which the Government is considering.

At the same time, the Government may also realize that Trump is threatening to block the outsourcing of IT business to India.

We are not sure if Trump is serious about banning outsourcing to India and, even if he imposes a ban, whether the companies will accept it. He can, if he wants, impose an export duty on US companies to discourage outsourcing.

India should, however, factor in that the US may commit its own hara-kiri by banning outsourcing to India.

In the meantime, India should consider imposing a tariff on the revenues of Google, Meta, Amazon and Microsoft in India.

Now that our relationship with China is improving, we should find means of living without Microsoft, Google, Meta or Amazon.

We already have the ONDC platform, which can replace Amazon. We can live without Meta, since it has no great value, and we can replace WhatsApp with local apps. Google mail can be replaced. AWS, Azure and Google Cloud services, however, are sensitive and, if stopped suddenly, could destabilize the Indian economy. We need to work out a strategy to shift cloud storage to indigenous platforms before it is too late.

If Trump goes ahead with blocking outsourcing, bringing pressure on the tech companies to stop their services, or forcing a tariff collection, we need to be ready to shift to alternatives, including Russian or Chinese services. Just as Trump has pushed India closer to China with his tariffs in some sectors, he will be pushing us closer to China in the IT field as well. If so, China would benefit but the US would collapse.

If we are pushed to this level, it will be an all-out economic war and the consequences will be completely uncontrollable.

Assuming we do not enter such a drastic phase, we still need to prepare to recover some part of the revenue lost to tariffs through DPDPA fines or Competition Act fines on these tech companies.

The EU can provide us leads, since the cases in which GDPR fines have been imposed also indicate the grounds on which India can impose fines. The DPB need not work too hard to find non-compliance: it can simply rework the GDPR instances and apply them in the Indian scenario.

For this to happen, India needs to notify the DPDPA as early as possible.

I hope MeitY will take on this responsibility to support the Government in additional revenue generation by implementing the law quickly and starting to impose fines.

Naavi

 
