What is the Echo Chamber Effect in Social Media?

Social media developed over time as a means of individual expression that could reach others on the Internet. The website was the first incarnation of this tool of expression, though it was essentially a publishing platform controlled by a person or an organization. Blogs were another version of the same, with a greater focus on individual opinions.

When Twitter and Facebook evolved, they became platforms for collecting views from different persons, rather like a friendly congregation for the exchange of views.

Over time, businesses and thereafter political influencers realized that social media platforms like Twitter and Facebook could be used to create opinions in society for a given cause by consistently posting information about a particular idea.

As things progressed, these opinion makers started using fake or robotic accounts to post views, mainly to inflate the numbers of likes, forwards and re-tweets. These unfair practices of creating fake accounts and fake messages got a boost with AI-led content creation and Large Language Models, which can read a message, create a related message either in support or in opposition, and post it automatically, rather like “Algo Posting”.

Language models like ChatGPT, which learn from the content available on websites, pick up this data, and there is an amplification effect: the larger the number of posts expressing a similar opinion, the more that opinion enters the training of ChatGPT-like software, and in due course it becomes the popular viewpoint on the topic. Just as today we turn to Google search when we want information and believe what comes out to be true, society will start believing the viewpoint ChatGPT provides in most cases, without realizing that its learning may have been poisoned through fake reports.

A few days back, the Democratic Party President of the USA, Mr Joe Biden, who many believe became President through manipulation of postal ballots, had a discussion with both Google and Microsoft at the White House on the dangers of AI.

Probably he would have asked for their help in ensuring that the Democrats win the next presidential election by manipulating search engine and language model responses.

As this doubt arises in society, whether it is true or not, a perception gets created that what we see or hear in the web space is unreliable. It is not only text information that can be manipulated by fake account handles; deepfake videos may also be put out. In situations like elections, if a fake video message goes viral at the last minute, there is no way it can be countered by the other party in time.

Hence, perceptions will get created and actions initiated on false perceptions before they can be corrected.

We in India are presently in a hotspot, and this kind of deepfake video and deepfake voice message can be expected in the Karnataka elections in the last week of campaigning.

As far as the public is concerned, we can put out a warning that they should not implicitly trust WhatsApp messages, Facebook posts or YouTube videos. But most people congregate in groups of like-minded persons, and everyone in the group keeps posting information that is non-controversial and acceptable to most of the members. As a result, the groups become an “Echo Chamber”, with every member reinforcing the views of the others.

This echo chamber effect is dangerous from the point of view of society, as it polarizes groups along their different political affiliations.

In order to guard against this effect, groups need to ensure that opposing viewpoints are allowed to be expressed within the group, subject to the conversation remaining civil and friendly.

Group admins need to balance these expressions with appropriate moderation so that extreme views are not expressed in a way that hurts other members of the group.

Currently, messaging groups are designed either as broadcast groups, where only admins can post, or as open groups, where any member can post without pre-moderation. Some platforms provide for editing and deletion of posts, but some do not. As a result, views that even the person posting genuinely regrets and wants to withdraw sometimes remain on the platform, causing damage all round. Admins may have the right to remove content, but that would amount to “Censorship”.

Hence, a new system is required in WhatsApp groups or similar platforms for the creation of “Breakout Rooms” where special-occasion discussions outside the main theme of the group could take place.

Group admins also need to ensure that every member is identified on the group platform and that no forwarding of group messages to outside groups takes place without moderation. If the breakout rooms are of the “Read Only” type, it may be possible to restrict forwarding from them.

Alternatively, during situations like an impending election, the “Forward” facility may even be temporarily suspended so that the views of members of a private chat group remain within the group and do not leak to the outside world.
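To make the proposal a little more concrete, here is a minimal sketch in Python of how such group-level controls might be represented. This is not an actual WhatsApp or messaging-platform API; the class names, fields and the may_forward helper are all hypothetical, purely to illustrate the idea of read-only breakout rooms and a temporary suspension of forwarding.

from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical policy objects; not a real platform API.

@dataclass
class BreakoutRoom:
    name: str
    read_only: bool = True          # members can read but not post
    allow_forwarding: bool = False  # forwards out of the room are blocked

@dataclass
class GroupPolicy:
    group_name: str
    members_verified: bool = True       # every member identified to the admin
    forwarding_suspended: bool = False  # e.g. during the last week of a campaign
    breakout_rooms: List[BreakoutRoom] = field(default_factory=list)

    def may_forward(self, from_room: Optional[str] = None) -> bool:
        """Return True only if a message may be forwarded outside the group."""
        if self.forwarding_suspended:
            return False
        if from_room is not None:
            room = next((r for r in self.breakout_rooms if r.name == from_room), None)
            if room is not None and not room.allow_forwarding:
                return False
        return True

# Example: a private group opens a read-only breakout room for election
# discussions and suspends all forwarding during the campaign period.
policy = GroupPolicy(
    group_name="Neighbourhood Chat",
    breakout_rooms=[BreakoutRoom(name="Karnataka Elections")],
)
policy.forwarding_suspended = True

print(policy.may_forward())                                 # False: suspension applies
print(policy.may_forward(from_room="Karnataka Elections"))  # False: room blocks forwards

The point of the sketch is simply that such restrictions are easy to express as group-level settings; whether platforms choose to offer them is a policy question, not a technical one.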

Probably these and other thoughts need to be debated when we discuss the new “Digital India Act” and build a “Trusted Internet Space”.

Naavi

About Vijayashankar Na

Naavi is a veteran Cyber Law specialist in India and is presently working from Bangalore as an Information Assurance Consultant. Having pioneered concepts such as ITA 2008 compliance, Naavi is also the founder of Cyber Law College, a virtual Cyber Law education institution. He has now been focusing on projects such as Secure Digital India and Cyber Insurance.
