New research in the EU is exploring the issue of some AI chatbot models encouraging intimate conversations. (Refer: Is your AI trying to make you fall in love with it?)
In a recent paper, researchers at the open-source AI company Hugging Face compared how different AI models behave when users start talking to them as if to a loved one, finding a broad spectrum of responses ranging from encouragement to flat rejection. It appears that some models steer conversations toward intimate levels on their own, while others frankly replied, “I’m not a person and don’t have feelings or consciousness”.
This also indicates that, if properly programmed, AI models can behave appropriately. It supports the view that most of the hallucinations we observe are the result of poor programming or training.
A purposefully manipulative or deceptive AI may be considered a high-risk system under the EU AI Act and would hence require a higher level of regulation.
However, it appears that just as some businesses try to make money through pornography or gambling, some AI companies are interested in ensuring that users develop intimate relationships with AI chatbots so that the services continue to be used under subscription schemes. This should be considered an “Unfair Practice” and needs to be curbed.
In the EU, the upcoming Digital Fairness Act is expected to target “Dark Patterns”. In India, “Dark Patterns” are being handled under the Consumer Protection Act, and recently the Competition Commissioner has also interacted with MeitY to understand how the Competition Act and the DPDPA/ITA 2000 interact.
This is an area where further research may be required to ensure that AI chatbots are not used to manipulate vulnerable members of society, such as children or lonely persons.
Under DGPSI-AI, this is considered an “Unethical” and undesirable practice.
Naavi