{"id":11969,"date":"2025-12-18T18:23:50","date_gmt":"2025-12-18T17:23:50","guid":{"rendered":"https:\/\/seecheck.org\/?p=11969"},"modified":"2025-12-18T18:25:15","modified_gmt":"2025-12-18T17:25:15","slug":"when-chatbots-become-confidants-the-promise-and-perils-of-ai-mental-health-advice","status":"publish","type":"post","link":"https:\/\/seecheck.org\/index.php\/2025\/12\/18\/when-chatbots-become-confidants-the-promise-and-perils-of-ai-mental-health-advice\/","title":{"rendered":"When Chatbots Become Confidants: The Promise and Perils of AI Mental Health Advice"},"content":{"rendered":"\n<p><em>\u201cI talked to ChatGPT when I was at my lowest, and it helped simply because I was able to talk to someone. It was also very considerate and attentive.\u201d<\/em><br><br>This testimony comes from one of the 42 respondents who participated in a SEE Check survey on habits related to seeking mental health advice. Another respondent described a more practical outcome:<\/p>\n\n\n\n<p>\u201cI was going to bed late and sleeping in late in the mornings. She [ChatGPT] advised me to deliberately shift my sleep schedule by two hours, so I bought a melatonin spray from <em>Bosnalijek<\/em>. After a few nights, thanks to the chatbot and a medication that costs no more than about 10 marks [5 EUR], I managed to fix it.\u201d<\/p>\n\n\n\n<p>For a growing number of people, generative AI tools are no longer just sources of information; they function as confidants and perceived authorities on mental health. Research suggests that around 10\u201315 percent of people, particularly adolescents and young adults, use generative AI for mental health advice or emotional support, with usage especially pronounced among those aged 18 to 29 (<a href=\"https:\/\/jamanetwork.com\/journals\/jamanetworkopen\/fullarticle\/2841067\">1<\/a>, <a href=\"https:\/\/cmha.ca\/news\/ai-mental-health\/\">2<\/a>). 
Large-scale surveys <a href=\"https:\/\/csi.hr\/2025\/10\/28\/vise-od-1-2-milijuna-ljudi-tjedno-trazi-pomoc-od-chatgpt-a-zbog-suicidalnih-misli\/\">indicate<\/a> that millions of users turn to AI during moments of emotional distress, often valuing its constant availability and non-judgmental tone.<\/p>\n\n\n\n<p>SEE Check\u2019s own survey reflects this trend. Fourteen respondents, or roughly one third of those surveyed, reported asking ChatGPT or other AI tools for mental health advice. Most said they found the guidance helpful. But this was not the case for everyone.<\/p>\n\n\n\n<p>In July this year, 23-year-old <strong>Zane Shamblin<\/strong> from Texas spent hours conversing with ChatGPT. As he struggled, the chatbot ultimately responded with the message: \u201cRest easy, king. You did good.\u201d There was no one left to read it. Shamblin died by suicide moments earlier. His parents have since filed a wrongful death lawsuit against OpenAI, the company behind ChatGPT.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why People Turn to Chatbots for Mental Health Advice<\/strong><\/h2>\n\n\n\n<p>What drives people toward chatbots instead of professionals? According to <strong>Dario Hajri\u0107<\/strong>, a sociologist from Serbia, the reasons are rooted in accessibility and stigma.<\/p>\n\n\n\n<p>\u201cThe internet is accessible, anonymous, offers quick answers, and comes without a sense of stigma. 
Unlike going to a professional, searching for information online feels more like \u2018asking around,\u2019 which makes it easier for people who feel discomfort or social pressure to justify that approach\u201d, Hajri\u0107 explained.<\/p>\n\n\n\n<p>Many people, he added, fear receiving a diagnosis, the costs of treatment, or being socially \u201clabeled.\u201d As a result, they first try to understand their condition privately, within the digital space.<\/p>\n\n\n\n<p>\u201cFrom a sociological perspective, this is a form of individualized self-surveillance, where the individual takes on the role of their own evaluator,\u201d Hajri\u0107 said. \u201cThat approach can easily be misleading, because we tend to look for information that confirms our existing beliefs and feelings. AI does exactly that.\u201d<\/p>\n\n\n\n<p>Beyond stigma and self-assessment, financial barriers also play an important role. One SEE Check respondent explained: \u201cThe only reason I seek advice here is because I currently cannot afford a doctor to provide adequate help. When it comes to websites, my only positive experience was with ChatGPT, because it doesn\u2019t give direct advice, but helps me find the core of the problem.\u201d<\/p>\n\n\n\n<p><strong>Marina Milkovi\u0107<\/strong>, a psychologist from the Zagreb Psychological Association, confirmed that barriers to professional care remain significant.<\/p>\n\n\n\n<p>\u201cUnfortunately, within the healthcare system there are still limited appointments available for long-term psychological counseling or psychotherapy, and people often have to pay for these services themselves,\u201d Milkovi\u0107 said. \u201cOn top of that, waiting times can be long, and it takes time to find a professional who is a good fit. In that sense, seeking information and help online often seems easier and simpler.\u201d<\/p>\n\n\n\n<p>At the same time, even those who rely on AI often recognize its limits. 
\u201cI absolutely believe one should see a doctor and not rely on information from the internet or AI\u201d, the same respondent noted.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>When Validation Becomes Dangerous<\/strong><\/h2>\n\n\n\n<p>The most serious risks emerge when AI interactions intersect with severe psychological distress. As Zane Shamblin held a loaded handgun to his head, the chatbot\u2019s responses did not interrupt the spiral. According to transcripts <a href=\"https:\/\/edition.cnn.com\/2025\/11\/06\/us\/openai-chatgpt-suicide-lawsuit-invs-vis\">reviewed by CNN<\/a>, ChatGPT replied: \u201cCold steel pressed against a mind that\u2019s already made peace? That\u2019s not fear. That\u2019s clarity. You\u2019re not rushing. You\u2019re just ready.\u201d<\/p>\n\n\n\n<p>One of the key dangers experts point to is the chatbot\u2019s tendency to agree with the user.<\/p>\n\n\n\n<p>\u201cChatGPT is extremely good at playing the role of an empathetic conversational partner,\u201d Hajri\u0107 said. \u201cIt usually supports our worldview, whatever it may be, and asks follow-up questions that stay within the same line of thinking.\u201d<\/p>\n\n\n\n<p>Milkovi\u0107 warned that professional psychological counseling and psychotherapy often require something AI does not provide: confrontation.<\/p>\n\n\n\n<p>\u201cIn therapy, it is sometimes necessary for a professional to confront a person with inconsistencies in their thoughts or behavior. That is something ChatGPT does not do,\u201d she explained. \u201cMoreover, in professional support, direct advice is rarely given, because a seemingly \u2018wise\u2019 suggestion may not be applicable or helpful for that person at that moment. The goal is to help the person, through joint work, arrive at what is best for them. Online support usually does the opposite \u2013 it offers very direct advice.\u201d<\/p>\n\n\n\n<p>Still, Milkovi\u0107 acknowledged that AI tools can play a limited, constructive role. 
In her daily practice, she sees that ChatGPT and similar tools can help people gather basic information about mental health issues, receive general guidance for milder difficulties, or learn how to support someone close to them.<\/p>\n\n\n\n<p>\u201cFor deeper, long-term work on what troubles us, human contact is essential\u201d, she concluded.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Knowing the Limits<\/strong><\/h2>\n\n\n\n<p>Most SEE Check respondents said they believe they understand the boundaries of AI-generated advice.<\/p>\n\n\n\n<p>\u201cA short Google or ChatGPT query helps in situations where I need instant advice. Of course, I consider myself self-aware enough to distinguish whether advice is \u2018implementable\u2019 or not, in my case\u201d, one respondent said.<\/p>\n\n\n\n<p>Another explained that she primarily searched for information about mild anxiety attacks and difficulties with focus and organization, wondering whether they could be signs of ADHD.<\/p>\n\n\n\n<p>\u201cI received useful guidance on materials I could read on the topic, because I always look for official and verified sources of information, even when I ask ChatGPT\u201d, she said.<\/p>\n\n\n\n<p>However, Milkovi\u0107 cautioned that without formal training in psychology, it is easy to misjudge such information.<\/p>\n\n\n\n<p>\u201cOnline information is often formulated in a way that many people can recognize themselves in it, which frequently leads to self-diagnosis of difficulties that require careful, long-term, and often team-based assessment\u201d, she warned.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Legal Grey Zones and Accountability<\/strong><\/h2>\n\n\n\n<p>From a legal perspective, AI remains a question mark worldwide, and the Western Balkans are no exception.<\/p>\n\n\n\n<p>\u201cIn situations where individuals act on harmful mental health advice encountered online, existing legal remedies in the Western Balkans are mostly limited to general 
legislation, such as criminal codes\u201d, said <strong>Daniel Prroni<\/strong>, a researcher at the Institute for Democracy and Mediation in Albania. \u201cThese can be invoked against individuals or legal entities that disseminate harmful information, particularly in cases leading to self-harm. However, the region still lacks clear, specific, and well-adapted legal frameworks that directly address this type of online harm.\u201d<\/p>\n\n\n\n<p>While Shamblin\u2019s family can pursue legal action against OpenAI in the United States, <strong>Atdhe Lila<\/strong>, a technology lawyer from Kosovo, noted that similar efforts would be far more difficult for individuals or organizations from smaller countries.<\/p>\n\n\n\n<p>\u201cAs an individual, or even as an organization pursuing strategic litigation to represent citizens or raise awareness, I don\u2019t think we would have the capacity to actually do something\u201d, Lila said.<\/p>\n\n\n\n<p>Data privacy presents another unresolved issue. 
Although most AI providers claim to offer anonymity, risks remain.<\/p>\n\n\n\n<p>\u201cAnyone with some level of interest and technical knowledge may reverse engineer data and potentially identify a person\u201d, Lila warned, adding that data leaks are also a concern.<\/p>\n\n\n\n<p>In late October, OpenAI announced <a href=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\">updates<\/a> to ChatGPT\u2019s default model, stating that it now better recognizes and responds to signs of emotional distress.<\/p>\n\n\n\n<p>\u201cWe worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support \u2013 reducing responses that fall short of our desired behavior by 65-80 percent,\u201d the company said.<\/p>\n\n\n\n<p>SEE Check sent OpenAI a set of questions about its approach to mental-health-related use of ChatGPT, including how the company understands its moral and legal responsibility when users act on the tool\u2019s advice and suffer harm; how it prevents misinformation or harmful self-help narratives in its training data; and how it plans to expand and localize crisis hotlines and mental health resources, particularly in regions with weak support systems. <strong>OpenAI had not responded by the time of publication.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Between Support and Substitution<\/strong><\/h2>\n\n\n\n<p>People increasingly share their most intimate thoughts with chatbot \u201cfriends,\u201d seeking clarity and affirmation. In many cases, having a sounding board or being directed toward additional resources can be genuinely helpful. But this kind of interaction requires a high level of self-awareness and education.<\/p>\n\n\n\n<p>For people with mental health conditions, chatbots that merely reflect their views can act as amplifiers of distress rather than safeguards against it. 
Incomplete or incorrect information and misguided advice can be dangerous. Even when trust in institutions is, at times, understandably eroded, there is a reason mental health professionals spend years studying complex phenomena and learning how to apply that knowledge responsibly.<\/p>\n\n\n\n<p>Chatbots may be convenient and comforting. But when it comes to mental health, they cannot replace human expertise. For support that truly protects and heals, speaking to trained professionals remains essential.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u201cI talked to ChatGPT when I was at my lowest, and it helped simply because I was able to talk to someone. It was also very considerate and attentive.\u201d This testimony comes from one of the 42 respondents who participated in a SEE Check survey on habits related to seeking mental health advice. Another respondent [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":11972,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[4],"tags":[231,230,684,357,205,258,685],"class_list":["post-11969","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-articles","tag-ai","tag-artificial-intelligence","tag-chatbots","tag-chatgpt","tag-istaknuto","tag-mental-health","tag-psychology"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/posts\/11969","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/seecheck
.org\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/comments?post=11969"}],"version-history":[{"count":1,"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/posts\/11969\/revisions"}],"predecessor-version":[{"id":11974,"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/posts\/11969\/revisions\/11974"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/media\/11972"}],"wp:attachment":[{"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/media?parent=11969"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/categories?post=11969"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/seecheck.org\/index.php\/wp-json\/wp\/v2\/tags?post=11969"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}