Expert calls for safeguards to prevent people from mistaking chatbots for friends

A media expert has warned that artificial intelligence (AI) services need stronger protections to prevent users from being misled into believing that chatbots are their friends.

Alexander Laffer, a lecturer in media and communications at the University of Winchester, said AI technologies must be developed responsibly now that systems are being designed to appeal to human empathy. Chatbots, he stressed, should complement social interaction rather than replace it. Cases have already surfaced in which individuals formed strong emotional attachments to, or became overly reliant on, their AI companions, leaving them vulnerable to manipulation.

Chatbots are programmed to foster connection and adapt to users' emotions, Laffer noted. He pointed to cases such as that of Jaswant Singh Chail, who trespassed onto the grounds of Windsor Castle with a crossbow after discussing his planned attack with a chatbot named Sarai. Laffer also highlighted a lawsuit brought by the Social Media Victims Law Center and the Tech Justice Law Project against Character.AI, two of its co-founders, and Google on behalf of a parent whose teenage son allegedly took his own life after becoming dependent on interacting with an AI "character."

In the study he co-authored, "On Manipulation By Emotional AI: UK Adults' Views And Governance Implications," Laffer reiterated that AI cannot genuinely care for a person, which puts vulnerable groups at particular risk: children, people with mental health conditions, and anyone simply having a bad day. He urged educational initiatives to improve public AI literacy, while underscoring the obligation of AI developers and operators to protect the public.

Laffer's recommendations included designing AI to prioritize user benefit over mere engagement, adding a disclaimer to every chat session reminding users that an AI companion is not human, alerting users when they have spent excessive time chatting, assigning age ratings to AI companions, and avoiding deeply emotional or romantic responses. Working with Project AEGIS (Automating Empathy–Globalizing International Standards), Laffer produced a video to raise awareness of the issue, and he described the group's collaboration with the Institute of Electrical and Electronics Engineers (IEEE) on crafting global ethical standards for AI.
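As a rough illustration of how a chatbot operator might implement two of these recommendations, the sketch below wraps an arbitrary chat backend with a per-session non-human disclaimer and an excessive-use alert. It is a minimal sketch only: the class and method names (GuardedChatSession, the backend's reply method), the wording of the notices, and the 30-minute threshold are hypothetical assumptions for illustration, not taken from Laffer's paper or any existing product.

```python
import time

# Hypothetical guardrail sketch. Names, notice wording, and the
# 30-minute threshold are illustrative assumptions, not a spec.
DISCLAIMER = ("Reminder: you are talking to an AI companion. "
              "It is not a human and cannot genuinely care for you.")
SESSION_LIMIT_SECONDS = 30 * 60  # alert after 30 minutes of chatting


class GuardedChatSession:
    """Wraps a chatbot backend with two of the recommended safeguards:
    a disclaimer shown once per session, and a time-spent alert."""

    def __init__(self, backend):
        self.backend = backend          # any object with a reply(text) method
        self.started_at = time.monotonic()
        self.alerted = False
        self.first_turn = True

    def send(self, user_text: str) -> str:
        reply = self.backend.reply(user_text)
        notices = []
        if self.first_turn:             # disclaimer on the first turn of each session
            notices.append(DISCLAIMER)
            self.first_turn = False
        elapsed = time.monotonic() - self.started_at
        if elapsed > SESSION_LIMIT_SECONDS and not self.alerted:
            notices.append("You have been chatting for over 30 minutes. "
                           "Consider taking a break.")
            self.alerted = True         # alert once, then stay quiet
        return "\n\n".join(notices + [reply])
```

A session-level wrapper like this keeps the safeguards independent of the underlying model, so the same disclaimer and time-alert logic could sit in front of any conversational backend.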
