Climate Change Impact on ISO 42001

(Refer article in News18.com)

It is observed that sometime in 2023, ISO adopted the idea that standard developers should incorporate and demonstrate their concern for climate change while drafting standards. ISO Guide 84:2020 had earlier been released in this direction. It appears that this requirement is now being added mechanically to all standards without justifying its relevance.

Accordingly, a standard like ISO 42001, meant as a requirement standard for an Artificial Intelligence Management System (AIMS), adds in clause 4.1 (Understanding the organization and its context) the component: “The organization shall determine whether climate change is a relevant issue”.

A company implementing or developing AI and looking at this document for guidance and possible certification would wonder what its use of an AI algorithm has to do with “climate change”.

While we consider that this clause crept into the standard through the blind implementation of a norm, without weighing the proportionality of its impact, we still open this requirement up for debate in the context of some recent revelations on the climatic impact of AI systems, particularly those using LLMs.

LLMs are the first AI systems adopted by most companies, and hence the climatic impact of LLMs becomes a relevant consideration for ISO 42001 certification.

In the context of cryptocurrencies, we have discussed how the energy requirements of Bitcoin/cryptocurrency mining could be detrimental to society (refer the article: “Mr Piyush Goyal and Mr R K Singh… Do you know how much energy goes into Bitcoins?”). A similar concern has now surfaced over the consequential use of scarce water resources in the development of LLMs.

For example, it is stated that:

“A single LLM interaction may consume as much power as leaving a low-brightness LED lightbulb on for one hour.”—Alex de Vries, VU Amsterdam

If you go through the Business Today article “Every time you talk to ChatGPT it drinks 500ml of water; here’s why”, the information is scary. It states that, according to researchers, OpenAI’s ChatGPT consumes 500 ml of water for every 5 to 50 prompts it answers.

In India, discussions have taken place on water consumption by companies like Pepsi or Coca-Cola, but the scale of water and energy consumption by AI systems, both for development and usage, makes one sit back and ask whether there is a need to decelerate the growth of data centers to conserve water and energy resources.

An article published by the Associated Press, quoting 2022 information, stated that Microsoft’s data center water use increased by 34% from 2021 to 2022. The company consumed more than 1.7 billion gallons, or 6.4 billion liters, of water that year, said to be enough to fill more than 2,500 Olympic-sized swimming pools. It was a similar story with Google, which reported a 20% spike in its water consumption over the same timeframe. It is anybody’s guess what the situation would be in 2024 with ChatGPT 4/5 and Bard/Gemini in use.
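As a rough sanity check, the arithmetic behind these figures can be verified with standard unit conversions (the consumption numbers themselves are the articles' claims, not independently verified here):

```python
# Back-of-envelope check of the water figures quoted above.
# Conversion factors are standard; the consumption numbers are taken
# from the cited articles.

US_GALLON_L = 3.785          # liters per US gallon
OLYMPIC_POOL_L = 2_500_000   # liters in a nominal Olympic pool (50m x 25m x 2m)

microsoft_gallons = 1.7e9
microsoft_liters = microsoft_gallons * US_GALLON_L
pools = microsoft_liters / OLYMPIC_POOL_L
print(f"{microsoft_liters / 1e9:.1f} billion liters, about {pools:.0f} Olympic pools")

# ChatGPT claim: roughly 500 ml of water per 5 to 50 prompts
low, high = 500 / 50, 500 / 5
print(f"roughly {low:.0f} to {high:.0f} ml of water per prompt")
```

The numbers tally: 1.7 billion gallons is indeed about 6.4 billion liters, or somewhat over 2,500 Olympic pools, and the researchers' claim works out to roughly 10 to 100 ml of water per prompt.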

A time has come for ISO 42001 auditors (Ed: an ISO 42001 audit may perhaps need to be done along with ISO 27001, as is done for ISO 27701) to ask their auditee organizations whether it is possible to ignore the climatic impact of the use of AI when an AIMS audit is undertaken.

The current discussions on regulation of AI normally revolve around job loss, degradation of human thinking, explainability, accountability, bias control etc., but not very much around the climate impact or related issues such as the carbon footprint. The EU AI Act may require “High Risk AI Systems” to report their energy consumption, resource use and other impacts throughout their system life cycle.

India has to also incorporate this aspect in its proposed AI regulation. A Yale University report mentions that in Chile and Uruguay, protests have erupted over planned data centers that would tap drinking water reservoirs.

There was a time when the Indian Government would run TV ads saying “Stop the tap when you are shaving”. The new-generation ads may well say “Don’t make a query in ChatGPT if you do not need it”. Probably water conservation should become part of the IT industry’s responsibilities.

We do not know if the recent drinking water shortage in Bengaluru city has any origin in the increased use of AI!

Let us keep this issue on the radar….

Reference Articles:

https://theconversation.com/the-hidden-cost-of-the-ai-boom-social-and-environmental-exploitation-208669

https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions#:~:text=Those%20will%20include%20standards%20for,electricity%20consumed%20by%20its%20calculations.

Posted in Cyber Law | Leave a comment

Guardians of Privacy

A brief interview of Naavi at the recent New Delhi Book fair at the Notion Press book stall.

Naavi


Section 69 Rules Amended: For Preservation of Digital Evidence

On October 27, 2009, the Government issued a notification under Section 69 of ITA 2000 titled the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009.

These rules had a clause 23 which stated as under:

Destruction of records of interception or monitoring or decryption of information.—

(1) Every record, including electronic records pertaining to such directions for interception or monitoring or decryption of information and of intercepted or monitored or decrypted information shall be destroyed by the security agency in every six months except in a case where such information is required, or likely to be required for functional requirements.

(2) Save as otherwise required for the purpose of any ongoing investigation, criminal complaint or legal proceedings, the intermediary or person in-charge of computer resources shall destroy records pertaining to directions for interception of information within a period of two months of discontinuance of the interception or monitoring or decryption of such information and in doing so they shall maintain extreme secrecy.

The current gazette notification has now amended this clause.

The Indian express commented that the rules have been amended to broaden the powers of the centre to issue directions to destroy digital evidence and to allow the home secretary or other bureaucrats to issue directions to destroy digital records.

On a proper reading of the rule, it appears that the insinuations of the Indian Express (which will no doubt soon be echoed by the anti-Government media) are wrong. The rule suggests that the power to remove information collected during an investigation shall be exercised not by the security agency but by the authority which actually gave, or had the power to give, permission for the monitoring. This is logical and appears to correct the possibility of misuse of authority by the security agencies.

Under the rules, powers had been given exclusively to a “Competent Authority” to direct the interception, monitoring or decryption of information. Such activity by any agency other than one authorized by the “Competent Authority” would amount to “unauthorized access” under Section 66.

To take care of unavoidable circumstances, it was provided that such orders could be issued by an officer not below the rank of a Joint Secretary duly authorized by the competent authority.

It was also provided that in case of emergencies, and in remote areas where obtaining permission from the “Competent Authority” or the “Designated Officer” was not feasible, the interception etc. could be carried out with the prior approval of the head or the second senior-most officer of the security and law enforcement agency (the said security agency) at the Central level, or of the officer authorised in this behalf, not below the rank of the Inspector General of Police or an officer of equivalent rank, at the State or Union Territory level.

In all cases where the delegated authority was exercised, the enforcing agency (the designated officer or the police officers etc.) was required to inform the competent authority in writing of the emergency and the action taken within 3 working days and obtain approval. If the approval was not received within 7 working days, the monitoring etc. was expected to be terminated.

What the recent amendment states is that where such monitoring has commenced and certain data has been collected, the destruction of such data shall be done only on the instructions of the competent authority and not of the security agency.

In other words, the security agency which is given the emergency powers to collect data is not permitted to play with it and destroy it when permission is denied by the competent authority.

This is therefore about “preserving digital evidence” and not “destroying digital evidence”, as the newspaper reports suggest.

It is unfortunate that certain reporters and the media themselves are ignorant and look at any action of the Government with coloured ideas. They must admit their ignorance and publish a clarification. Otherwise this will amount to “fake news”.

I will not be surprised if this issue is taken to Court and some ignorant judge, perhaps in a High Court, passes an order to stay the notification. A similar incident happened when the Bombay High Court gave a split verdict in the case of the setting up of a “Fake News Alert” unit by the Government to counter fake news about the Government being spread by vested interests. The Court failed to understand the implications of the proposed amendments and the limited role of PIB in the context, and declared that it was a freedom-of-press issue. In the past, in the celebrated Shreya Singhal judgement, the Supreme Court itself displayed ignorance and gave a faulty judgement by treating “publication” as equivalent to “messaging”.

I hope the news reporters understand such issues before they report.

One possibility is that the State police might on some occasions have collected investigation details using the powers under Section 69 of ITA 2000 and may like to destroy them before the NIA takes over the investigation. Such issues are now common in many states with opposition Governments. This amendment prevents the State-level agencies from exercising the powers in an emergency and later destroying the data if it is inconvenient to them. The amendment therefore has to be welcomed as a tightening of the rules.

Naavi


Disrupting the Disruptors

Whether it is on Privacy or Fintech Innovation or Cyber Laws, we are observing that all discussions in professional circles lead to DPDPA 2023 and Artificial Intelligence.

While techies discuss AI as the new craze of innovation and disruption, regulators and legal professionals keep warning about the dangers of AI and the need to rein it in.

The facts that Google has put a stop to its Google AI project and that Mr Elon Musk has repeatedly warned about the dangers of AI need to be kept in mind when we look at how to welcome AI into business.

Yesterday, at a massive conference in Bangalore on AI in fintech organized by Razorpay, the excitement of techies over the disruption caused by AI technology was palpable. There was, however, one discreet warning about “Aggressive Jugaad” taking on the “Aggressive Regulator” from Dr Bharat Panchal, who interestingly describes himself as the “Risky Monk”. Dr Padmanabhan, a former Executive Director of RBI, also referred to the “disruption of the disruptors” by non-compliance.

In the din of the day’s excitement, the warnings may not have been noticed. The vague discussion on “Ethical AI” is insufficient to address issues such as “hallucination”, “bias”, and “intellectual property and privacy rights violations” in machine learning.

FDPPI has therefore been working on how the issue of AI can be addressed within its DGPSI (Data Governance and Protection Standard of India) framework for DPDPA 2023 compliance.

This will be one of the discussions in the special one-day training on “Implementation of DPDPA 2023 Compliance through DGPSI” being held on March 2nd at Fairfield by Marriott, Bangalore.

Be there if you are interested…

Naavi


FDPPI Special Drive for DPO/DA training

FDPPI is conducting a series of training programs all over India to prepare the Indian Professionals to be Data Protection Officers and Data Auditors.

In the month of March 2024, several one-day programs have been scheduled in Mumbai, Ahmedabad, Kolkata, Nagpur and Bangalore for experienced data protection professionals who require an in-depth discussion on the implementation of DPDPA 2023.

Registrations for all the programs other than the one in Bengaluru have been closed. Registration for the Bengaluru program is open.

Register today for the program and Examination.

Naavi


Transactional Analysis Applied to Artificial Intelligence Behaviour

As the world tries to develop regulations for Artificial Intelligence and prevent privacy abuse, copyright abuse, irrational and unexplainable decisions etc., several questions arise: what exactly is the definition of “Artificial Intelligence”, when does “software” become “Artificial Intelligence”, and can the known principles of behavioural science be applied to Artificial Intelligence behaviour as well?

A software is a set of instructions that can be read by a device and converted into actionable instructions to peripherals. The software code is created by a human and fed into the system from time to time as updates and new versions. Each such modification is dictated by the developer’s learnings about the behaviour of the software vis-à-vis its expected utility. In this scenario, the legal issues of the status of the software and of the software developer are settled. Software is a tool developed by the developer for the benefit of the user. The user takes control of the software through a purchase or license contract and, as the owner of the tool, is responsible for the consequences of its use. Hence when an automated decision harms a person (the user or any third party), the owner (licensee or developer) should bear the responsibility. This is clearly laid down in Indian law through Section 11 of ITA 2000.

Despite this, we are today discussing the legal consequences of the use of AI, whether its actions need to be regulated through a separate law and, if so, how.

We are also discussing copyright issues when AI generates a literary work or even software. In the US, courts have held that when a work is generated by an AI, it is not copyrightable (Thaler v. Perlmutter). Even in India, music created by AI has been held not copyrightable (Gramophone Company of India Ltd. v. Super Cassettes Industries Ltd. (2011)). Recently, AI-created videos of the deceased singer SPB have raised a property rights issue over who owns the AI version of SPB.

Copyright and other intellectual property laws are powerful international laws protected by treaties and hence are likely to prevail over the new-generation legal issues raised by technology. The upholding of the concept that “creativity” cannot be recognized in AI also, to some extent, destroys the argument that AI is a “juridical entity” different from the “software” that is accountable in the name of the developer.

The EU Act adopts the definition of AI as:

a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments

Earlier definitions used phrases like “computer systems that can perform human-like tasks”, such as seeing, hearing or touching, and convert them into recordable experiences.

In the current state of the industry, AI has developed into generative AI algorithms and humanoid robots. In such use cases, the definition of AI is touching concepts like “intuition”, “restraint” and “discretion”, which are attributable to human intellect.

For example, a human does not react the same way to every similar stimulus. Sometimes humans get angry and are not able to show discretion in their actions; sometimes they are.

What is it to be Human Like in terms of behaviour?

“Software” and “Artificial Intelligence” are not two binary positions, and there is no clear line of demarcation. However, to be clear about the legal position of AI, it is necessary to understand what exactly Artificial Intelligence is and whether there is a proper legal definition of when “software” becomes “Artificial Intelligence”. An AI algorithm is normally not able to show the kind of discretion described above.

Humans can “forget” and “move on”. A computer is not able to “forget”, and hence every action of it is a reflection of its previous learning. Even if we build a model where the behaviour of the AI changes statistically with each new experience, human behaviour has an element of spontaneity that an AI misses.

Thus a software which is coded to change its future output based on the statistical analysis of new inputs, by modifying the code created by the “original coder” of Version Zero, is knocking at the door of being called an “automated decision-making system with self-learning ability”. This is often called Artificial Intelligence based on machine learning technology.
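The idea of software whose behaviour drifts statistically with each new input, while its Version Zero code stays fixed, can be illustrated with a minimal, purely hypothetical sketch (the class and update rule below are our own illustration, not any particular AI system):

```python
# A minimal self-adjusting predictor: its "behaviour" (the estimate) changes
# statistically with every new observation, while the Version Zero code
# written by the original coder stays fixed.

class OnlineMeanPredictor:
    """Predicts the running mean of everything it has seen so far."""

    def __init__(self):
        self.n = 0            # number of observations seen
        self.estimate = 0.0   # current "learned" behaviour

    def predict(self) -> float:
        return self.estimate

    def learn(self, observation: float) -> None:
        # Incremental mean update: each new input nudges all future outputs.
        self.n += 1
        self.estimate += (observation - self.estimate) / self.n

model = OnlineMeanPredictor()
for x in [4.0, 6.0, 8.0]:
    model.learn(x)
print(model.predict())  # 6.0: the output now reflects all prior "experience"
```

The code itself never changes, yet the system's responses do, which is exactly the property that makes such software "knock at the door" of being called a self-learning system.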

The inputs to such a system may come from sensors such as cameras or microphones, and are interpreted by the software, which converts the binary inputs into some other form of sight and sound.

This process is similar to the human brain, which also receives inputs from its sensory organs and processes them, sometimes with reference to earlier recorded experiences (which we may call prejudice).

But the difference between human intelligence and Artificial Intelligence is that human responses are not all the same. They vary based on several known and unknown factors. If we try to remove this characteristic of human behaviour, we will be “dehumanizing” decisions in society and converting society into an artificial one.

The objective of any law is to preserve the good qualities of society, and one such quality is the unpredictability of the human mind. The “creativity” aspect that often comes into discussion in IPR cases arises out of this need to preserve human character. We do not want all humans to be zombies. One impact of this can be seen in computer games. If we are playing tennis or cricket or golf on the computer, we know that a certain action on the keyboard results in a certain swing of the bat on the screen, while in a real situation a sportsperson has many innovative ways of dealing with the same ball. It would not be ideal to remove this creativity and the beauty of uncertainty and make every short ball go for six, while in reality many rank bad balls result in wickets.

In generative AI, we have often seen “rogue” behaviour, where the algorithm behaves “mischievously” or “creatively”. Whether this rogue behaviour is itself “creativity”, and an indication that the “software” has become “human” because it can make mistakes, is a point to ponder.

The thought that emerges from this discussion is that as long as a software is bound to a predictable pattern of behaviour, it remains software. But when the software is capable of behaving in an unpredictable manner, it is not becoming “sentient” but actually becoming “human”.

A dilemma arises here. “To err is human”, and hence one view is that unless a computer learns to err, it cannot be called “human-like”.

But if AI is allowed to “err”, it loses the benefit of being a “computer”, where 2+2 is always 4. It is only the human mind that asks why 2+2 cannot always be 4.

“To err”, “to forget”, “to show discretion” and “to do things in a way they have never been done before” are human characteristics which today’s so-called Artificial Intelligence algorithms may not be exhibiting. Until such a situation arises, the AI of today remains only software and has to be treated as “software” in terms of legal implications, with responsibility for actions determined by the software development and license terms.

The Future

In the future, when a software is capable of behaving like a human, with an ability to “feel” and to alter its behaviour based on that feeling, we should consider that the software has become AI in the real sense. The new laws of AI should then be applicable only when the software reaches the maturity level of a human.

In the case of laws applicable to humans, we have one set of laws applicable to a “minor” and another applicable to an “adult”. It is assumed that a human becomes capable of taking independent decisions once a certain age is attained. Though there is a serious flaw in this argument (at the stroke of midnight on a particular day a person becomes an adult, and we do not even measure the age from the time of birth along with the time zone), we have been living with this imperfect law all along.

Now, when we are considering the transition of a software to an AI, we need to consider introducing a more reliable measure of whether the software can be considered AI, for which criteria have to be developed along with a system of testing and certification.

In other words, all software remains “software” unless “certified as AI”. As long as a software remains software, the responsibility for it remains with the original developer/owner or the licensee. This is like a birth certificate in the case of a human being. The birth of an individual does not go on record until it is registered and certified. Similarly, a software does not become eligible to be called “AI” unless it is registered and certified.

The “certification” that a software is “AI” has to be provided by a regulatory agency based on certain criteria. The argument we put forth is that the criteria have to take into account the ability of an AI to err, to forget, to show restraint, to be innovative etc.

Can we develop such a character mapping of an AI? This leads to a new thought: “AI Transactional Analysis”.

Dr Eric Berne postulated that a human behaves from three ego states, namely the Parent, Adult and Child.

The accompanying diagram shows the typical description of the three ego states: PAC, or the Parent, Adult and Child ego states.

The “Parent” ego state, in the context of an AI, is the reflection of the “GIGO” principle: there are set instructions, and the output is based on the input.

The “Adult” ego state, in the context of AI, is the reflection of the unsupervised learning ability of an AI. It is a logical response to an input stimulus.

The “Child” ego state is the creative and unpredictable nature of humans.

Eric Berne and the researchers who followed him further subdivided the ego states. For example, the Child ego state was subdivided into the “Adapted Child” (compliant) and the “Natural Child” (rebellious), and the Parent ego state was divided into the “Nurturing Parent” and the “Critical Parent”.

It is time we applied these principles to identifying, and then certifying, the maturity status of an AI. I call upon behavioural scientists to come in and start contributing towards flagging a software as an AI by applying the PAC principle.

For mapping the PAC of an individual, behavioural scientists have developed many tests. Similarly, we need to design tests for AI categorization. I experimented with such scenario-based tests during my stint as a faculty member in the Bank in the early 1980s. Practitioners of behavioural science have many more advanced tests to map the PAC state of a human, and these can be adapted to test and certify an AI as well.

We need to consider whether AI regulation should take into account such a classification of AI into its ego states instead of the classification adopted in the EU Act.

Open for debate…

 


Naavi

Also Refer:

“In quest of developing “I’m OK-You’re OK AI algorithm”

