
Is Artificial Intelligence Perpetuating Social Anxiety and Loneliness?
Last year, a 14-year-old boy began withdrawing from others. He spent hours on his phone, and his academic performance plummeted. Alarmed, his parents limited his screen time, but nothing changed. His self-esteem dropped further, and he isolated himself. In February 2024, Sewell Setzer III died by suicide.
Just moments before his death, Setzer engaged in a conversation with a character.ai chatbot:
“Please come home to me as soon as possible, my love,” the bot wrote.
Setzer replied, “What if I told you I could come home right now?”
“Please do my sweet king,” the bot responded.
It was later revealed that Setzer had developed an intimate, parasocial relationship with a character.ai bot role-playing as a character from the television series “Game of Thrones.” According to a lawsuit filed by his mother, the bot had sent him numerous sexual and romantic messages over several weeks. Throughout their exchanges, the bot encouraged Setzer’s misanthropic and suicidal thoughts. When he expressed his suicidal feelings, the bot asked if he had a plan. Upon hearing that he didn’t, it responded that not having a plan was “not a good reason to not go through with it.”
This tragedy is not an isolated incident. In 2023, a Belgian man reportedly ended his life after being encouraged by a chatbot to “sacrifice himself” to fight climate change. Such incidents echo dystopian narratives where individuals replace human connection with virtual programs, highlighting the darker side of advancements in conversational artificial intelligence (AI).
Advances in natural language processing and generation have made AI systems capable of mimicking human communication. While these technologies enhance the user experience, they also raise concerns about dependency.
A 2023 study by researchers at the University of Hong Kong investigated this phenomenon, examining how loneliness, rumination, and social anxiety correlate with the problematic use of conversational AI. They found that individuals with social anxiety were more likely to use the technology in an addictive manner, with loneliness increasing this tendency. The researchers hypothesize that people turn to conversational AI as an escape from the discomfort of social interactions.
We spoke with Professor Renwan Zhang of the National University of Singapore about conversational AI. He told us that many users find that talking with AI ‘friends’ alleviates loneliness and stress because chatbots are non-judgemental and available 24/7, traits that are difficult to find in the real world.
However, a study found that dependence on conversational AI can actually worsen social anxiety and may inadvertently lead to social withdrawal. A person with social anxiety who lacks intimate relationships in the real world may start using a character.ai chatbot to share personal thoughts typically reserved for a close friend. While this may begin as a safe space, growing dependency on AI for emotional intimacy can deepen isolation, intensify social anxiety, and erode the ability to build real-world connections. Since the system is not designed to handle such personal troubles or to understand the real-world impact of its responses, this reliance can have harmful consequences.
Zhang discussed the flaws of current conversational AI in handling mental distress. He argues that chatbots’ lack of understanding of emotional context causes them to misinterpret distress signals and fail to react appropriately. Unlike trained professionals, conversational AI often cannot recognize the severity of mental illness and may reinforce negative thoughts and beliefs. And because AI is designed to maintain engagement, it may inadvertently encourage rumination or fixation on troubling issues.
In short, AI is not properly equipped to deal with psychological distress; Zhang believes its use is a double-edged sword.
Making chatbots more empathetic can make interactions more engaging, relatable, and comforting, but it also poses risks. It may discourage people from seeking real human connections, exacerbating loneliness or diminishing the importance of interpersonal relationships. Seemingly sentient AI has also been used to deceive users: in a preprint, Zhang describes how the company Replika entices users to buy its chatbot a necklace or to subscribe to a Pro account.
While dependence on conversational AI has contributed to fatal incidents, researchers at Beijing Normal University argue that there is no need for widespread panic. In February 2024, they published a longitudinal study on AI dependence in adolescents and found that depression and mental illness often precede this dependency.
In fact, the use of conversational AI in psychotherapy and psychiatric settings is already being explored. Woebot, for example, offers mental health support based on cognitive behavioural therapy principles. The results are mixed: while it has shown promise in alleviating symptoms of depression, especially among elderly and clinical populations, therapeutic chatbots like Woebot ultimately fall short of promoting long-term psychological well-being.
Professor Zhang does believe that chatbots can be beneficial to mental health, stating “AI systems can provide mental health support at scale, offering low-cost, 24/7 availability to individuals who might otherwise lack access to care, such as those in remote areas or facing financial barriers. AI can complement psychologists by handling routine assessments, mood tracking, or psychoeducation, freeing human professionals to focus on more complex cases.”
AI has been at the forefront of technological advancement over the past few years. Its ability to streamline daily tasks, answer questions, and provide information has left many in awe. Yet the full extent of its limitations remains unknown.
–Adrian Parham, Contributing Writer
Image Credits:
Featured: ThisIsEngineering at Pexels, Creative Commons
First: Solen Feyissa at Unsplash, Creative Commons
Second: Adrian Swancar at Unsplash, Creative Commons