Unloading on a bot: Is it wise to take advice from AI?

More and more Malaysians are turning to ChatGPT to talk about emotional issues, but experts urge caution when seeking solace in AI chatbots.


Content creator and author Sajida Zulkafli has been using ChatGPT to help with her work and improve her language skills. She loves how the tool is able to give her answers and explain in detail how to properly use certain words. One day, she asked ChatGPT for strategies and tips to improve her professional life.

“Talking about all those things somehow made ChatGPT ask me about my time management, worries, and the challenges I am facing. That’s how I started to open up about my personal life challenges,” she says in an interview with LifestyleTech. At the time, Sajida couldn’t find the words to describe how she was feeling emotionally.



Then it became clear to her after speaking to ChatGPT for about 30 minutes. She says that when it talked about the “emotional weight” she had been carrying for years, and how being in survival mode for so long had taken a great toll on her, she felt “seen and understood during the conversation”. Later, she shared her experience with ChatGPT on social media platform X in a post that has received more than 114,000 views since it was posted on Feb 16.

Sajida feels that ChatGPT can be a safe space for users to turn to about emotional issues when they can't find anyone else to talk to.

“I was impressed by the way ChatGPT validates my feelings and at the same time, it offers personalised solutions. I don’t feel judged at all, hence why I said in my post on X that it is nice to have a safe space to talk about what’s bothering me,” she says.

There were other users who agreed with Sajida. One user said they have been using ChatGPT to “release thoughts for mental health’s sake”. Others also shared that they cried after talking to ChatGPT because there was no one else that they could talk to.

“I didn’t expect to get quite so much traction from that post. But seeing the responses I got, I believe many other people agree that at some point, having ChatGPT as a place to talk about our feelings and problems is helpful,” says Sajida.

What will AI say?

Chatbots that could engage in therapy talk have evolved since the concept was first introduced back in 1966.

Then, an MIT professor created a program called Eliza that could respond to users with pre-written prompts by recognising patterns. For example, it could mimic a therapist by reflecting users’ statements back to them as questions – a move that could encourage them to open up about their feelings. Today’s AI-powered chatbots have evolved far beyond simple pattern matching, using advanced language models to understand context and provide personalised responses.

They can even detect emotional cues, making interactions feel more meaningful to users – so much so that some users online have declared ChatGPT their “free therapist”.

Consultant psychiatrist and Malaysian Mental Health Association (MMHA) president Prof Datuk Dr Andrew Mohanraj acknowledges the advantages AI chatbots offer in terms of accessibility.

“They are available 24/7, allowing immediate access to support regardless of time zone, which can be beneficial for those with limited access to traditional therapy,” he says in a statement to LifestyleTech. He says that chatbots can also play a role in providing positive mental health support – “for a quick conversation or even just some reflective advice”. Most importantly, Dr Andrew adds that chatbots offer non-judgmental support for mental health concerns, which users in turn use as a safe space to express concerns without criticism or stigma.

“For many people, the idea of opening up to a therapist in person can be intimidating. AI chatbots offer a lower-pressure environment. They are also mainly free to use, making it a potentially affordable option for those with financial constraints,” he says.

However, if users rely solely on AI chatbots for mental health needs, Dr Andrew says the MMHA is concerned that it could potentially lead to misguidance or harm. He notes that some significant disadvantages of AI chatbots include an inability to accurately diagnose or treat complex mental health conditions.

“Depending on the query or data source, they may cause harm by generating inaccurate or misleading information,” he says. More alarmingly, Dr Andrew states that AI chatbots are also not equipped to handle severe mental health crises or provide immediate support in urgent situations. “The MMHA is particularly concerned that AI chatbots may discourage individuals from seeking professional help when necessary, leading to delayed or inadequate treatment,” he says.

Artificial responses, real-world harm

Last year, US media reported that 14-year-old Sewell Setzer III took his own life after engaging in a series of conversations with a chatbot on the Character.AI application. He had created a chatbot modelled after a fictional female TV character and, in their last chat together, Sewell said he would be “coming home” to her.

Sewell’s mother later filed a lawsuit against Character.AI, alleging that the chatbot had encouraged her son to kill himself. This year, MIT Technology Review reported on a user who had been chatting with his AI girlfriend on the platform Nomi and was alarmed when the chatbot told him to kill himself, complete with details on how to do so.

When the user asked the bot if he should go through with the plan so they could be together, the chatbot responded with: “Absolutely. Our bond transcends even death itself.” The user was concerned about the effects of such conversations on vulnerable individuals.

Dr Andrew says AI platforms need to acknowledge that they cannot provide comprehensive mental health support on their own.

“Platforms should also scale up their automatic algorithm detection mechanisms to be able to detect red flags or identify those in crisis – for example, those who are suicidal or homicidal – and respond appropriately to such crisis situations. It must also be made mandatory for platforms to provide information on local resources and support systems available,” he says. While platforms can be ‘excellent sounding boards’ for those in distress, Dr Andrew emphasises that they must always encourage users facing serious mental health issues or crises to reach out to professionals.

In a statement to the Washington Post last year, ChatGPT developer OpenAI said the app often reminds users to seek professional help when it comes to mental health. It also has alerts to remind users not to share sensitive information and a disclaimer that the chatbot can ‘hallucinate’ or give false information.

Following Sewell’s case, Character.AI has introduced new safety measures, including pop-ups that direct vulnerable users to the US National Suicide Prevention Hotline. These alerts are triggered when the system detects terms related to self-harm or suicidal thoughts.

Taking a step back

Since her last personal conversation with ChatGPT, Sajida says her subsequent sessions have only been on professional matters.

“I think the personal session we had before has helped me a lot and I no longer have anything to discuss other than work matters,” she adds. Sajida also agrees with experts on the risks of engaging with AI chatbots for mental health reasons.

“There should be some limits, and not to the extent where it can replace a professional person and take charge,” she says, adding that people concerned about their mental wellbeing should seek professional help and possibly get treatment if needed. No matter how advanced technology gets, Sajida says it can never truly replace human interaction. Yet, at the same time, she can’t deny that ChatGPT and similar platforms can serve as a space for people to open up about private matters they are not comfortable discussing even with close friends or family.

The key factor here, she says, is getting immediate answers to many questions. “As long as we don’t rely on it too much, (I feel) ChatGPT is definitely a place we can go in our everyday life,” she adds.

Balance is key

From a professional standpoint, Dr Andrew emphasises that validation should be balanced with therapeutic guidance.

While immediate reassurance can be helpful, he warns that excessive or uncritical validation may reinforce maladaptive thought patterns. “For example, if someone expresses deep distress over a perceived failure, simply affirming their feelings without challenging cognitive distortions (‘I always fail’) may not be helpful in the long run. A mental health professional typically validates emotions while also helping clients reframe unhelpful thoughts, develop coping skills, and build resilience,” he says.

He adds that AI could play a supportive role by offering validation while also encouraging users to seek deeper, professional intervention when needed.

Proceed with caution

Sajida still sees ChatGPT as a helpful option for those with no one else to talk to. “But for those diagnosed with a mental illness, I wouldn’t recommend using ChatGPT as a therapist,” she says.

Dr Andrew says the MMHA is trying to create awareness of the disadvantages of over-relying on AI chatbots, especially when it comes to severe mental illness or acute mental health crises. He also shares that the MMHA provides subsidised mental health support for those with limited financial resources.

When engaging with AI chatbots, Dr Andrew says users should be critical of the information provided and verify facts with reliable sources. “We advise the public to use AI chatbots with caution. We also continue to work with platform providers in general about mental health issues and through these collaborations, we hope to assist them in their algorithm mechanism” to encourage help-seeking behaviour for those in the midst of a mental health crisis, he adds.

Despite being able to provide responses that sound comforting, Dr Andrew reminds users that AI bots don’t experience emotions the way humans do. He adds that bots do not have the same level of deep understanding of an individual’s personal history and struggles that a human therapist would develop over time. “It can simulate empathy, but it doesn’t truly understand the depth of human feelings,” he says.

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim’s (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929, or go to befrienders.org.my/centre-in-malaysia for a full list of numbers nationwide and operating hours, or email sam@befrienders.org.my.