How does AI Cult develop?….the Philosophy, Physics and freedom of Press

(Continued from previous article)

In the previous article, I discussed a hypothesis that, just as in the "Stockholm syndrome", users of technology exhibit a certain level of blind faith in technology, which, in the AI scenario, has the potential to translate into an "AI Cult".

This syndrome needs to be recognized as part of AI-related risk, and when we develop AI regulations, we need to initiate measures to reduce the possibility of this cult developing.

Naavi’s AI ethics list will therefore include an element that tries to address this concern.

To understand how this cult can develop, we need to understand how the human brain recognizes "Truth".

"Truth" is what an individual believes to be true; there is no "Absolute Truth". This is what Indian philosophers have always said. Adi Shankara described this entire world and its experience as "Maya". Lord Srikrishna stated the same in the Bhagavad Gita. Physicists, for their part, developed the matter-wave theory, which suggests that what we perceive as solid matter is at a deeper level a wave phenomenon (early physicists even posited a hypothetical medium, the "ether", for such waves).

In the contemporary world, we see the media creating an illusion, and most of us start believing what the media projects repeatedly.

Recently I saw an example of one such belief system when I asked an individual why he hates Modi. He said it was because of 2002. Then I realized that this person was about 14 years of age in 2002, and whatever impression he could have formed of Modi came from the narrative that was built in the media subsequently. This was the prolonged period when Teesta Setalvad, Lalu Prasad Yadav and the Congress party, supported by media like NDTV, created a specific narrative. People like us, who were of a mature age at the time, were able to see the Godhra train fire as the trigger and the Supreme Court decision as a vindication of Mr Modi, and are not perturbed even by the BBC documentary.

But those who developed a belief in the Teesta Setalvad narrative are now finding vindication in the BBC documentary and ignoring counter-views.

If any person has to overcome a belief system and accept that the truth could be different, he needs to develop the ability to set aside the current belief, explore what created that belief in the first place, and then find out whether the reasons for developing that belief were correct.

A change in one's "belief system" results in a change in what one believes to be the truth.

Another example from our own generation is the way history was taught in India, glorifying the colonialists and invaders and belittling local heroes. This too is changing with the new information that is surfacing now. The forthcoming generation will have a different view of historical characters like Gandhi, Nehru, Godse, Subhash Chandra Bose, etc.

Without getting diverted into the political debate about what is right or wrong, I would only like to highlight that what we believe to be the truth is what we have cultivated out of the information we have received over a period of time.

This may be called "brainwashing" if you like. But it is perhaps a natural phenomenon that can happen even in circumstances where the data input was not maliciously altered.

In the AI world, we may call this "Training Bias". If the data used in machine learning was not neutral, the AI will develop a bias. If we create an AI news algorithm using news reports from NDTV and CNN IBN, we would arrive at a totally different narrative of India than if we used data from Republic.
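To make this concrete, here is a minimal sketch of training bias in pure Python, using invented toy data (the corpora and the headline are hypothetical illustrations, not actual reports from NDTV, CNN IBN or Republic). The same scoring model, trained on two differently slanted corpora, assigns opposite sentiment to the same neutral headline:

```python
# A minimal, illustrative sketch of training bias: two copies of the same
# naive word-frequency model, trained on corpora with opposite editorial
# slants, score the identical headline differently. All data is invented.
from collections import Counter
import math

def train(labelled_docs):
    """Count word occurrences per label (a naive Bayes-style word model)."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in labelled_docs:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Return the log-odds that `text` is 'positive' under the trained counts."""
    log_odds = 0.0
    for word in text.lower().split():
        pos = counts["positive"][word] + 1  # add-one smoothing
        neg = counts["negative"][word] + 1
        log_odds += math.log(pos / neg)
    return log_odds

# Two hypothetical training sets with opposite slants on the same subject.
corpus_a = [
    ("government policy fails the people", "negative"),
    ("protest grows against government decision", "negative"),
    ("court questions government action", "negative"),
]
corpus_b = [
    ("government policy lifts the economy", "positive"),
    ("strong support for government decision", "positive"),
    ("court upholds government action", "positive"),
]

headline = "government announces new policy decision"
model_a, model_b = train(corpus_a), train(corpus_b)
print("model A log-odds:", round(score(model_a, headline), 2))  # negative
print("model B log-odds:", round(score(model_b, headline), 2))  # positive
```

Running the sketch prints a negative log-odds from model A and a positive one from model B for the identical headline: the "truth" each model reports is simply the slant of its training data.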

It is for this reason that one of the major requirements of AI ethics is that the AI be free from bias based on race or other factors.

The "Google is right" syndrome also stems from the same thought process. Our children, who have started using Google to do their homework, are more prone to this syndrome than we adults, who may occasionally come to the conclusion that Google may be wrong. Some of us have observed that even ChatGPT is unreliable, and not only because its training data is said to extend only up to 2021, or because that data did not have much information on India.

But for some, ChatGPT is great, and they will soon accept it as the baseline standard for accepting any information.

It is such a tendency that could land courts in trouble (Refer here). In a recent case in Colombia, Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document (P.S.: not in English) dated January 30, 2023.

We must note that an article in theguardian.com stated:

Quote:

Padilla defended his use of the technology, suggesting it could make Colombia’s bloated legal system more efficient. The judge also used precedent from previous rulings to support his decision.

Padilla told Blu Radio on Tuesday that ChatGPT and other such programs could be useful to “facilitate the drafting of texts” but “not with the aim of replacing” judges.

Padilla also insisted that “by asking questions to the application, we do not stop being judges, thinking beings”.

The judge argued that ChatGPT performs services previously provided by a secretary and did so “in an organised, simple and structured manner” that could “improve response times” in the justice system.

Prof Juan David Gutierrez of Rosario University was among those to express incredulity at the judge’s admission.

He called for urgent "digital literacy" training for judges.

Unquote:

In due course, as more and more people start referring to AI, we will develop a "blind faith syndrome" towards AI and start believing that "what ChatGPT or Bing says must be true, or at least reasonably true".

In mediation and negotiation, this may strongly influence dispute resolution, while if we have judges like Padilla, we may have judgements delivered based on the AI's views.

In India, if the Supreme Court starts referring to AI, then whatever George Soros wants could find its way into the judgements of the Court, because it would be the predominant narrative in the media, which could be a training input into Bing's "Sydney".

It is time that we flag this possibility and find appropriate solutions in AI regulation.

(Let us continue our discussion. Your comments are welcome)

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India, presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He has now been focusing on projects such as Secure Digital India and Cyber Insurance.

1 Response to How does AI Cult develop?….the Philosophy, Physics and freedom of Press

  1. Anand says:

    Well narrated, especially on the negative impact on the Indian judicial system, where a few popular media houses are so biased and influenced. Think of relying on a biased AI and its impact on the justice system: the judiciary soon gets adulterated and, in a short time, reaches a point where there will not be any room for course correction.
