When Chatbots Become Confidants: The Promise and Perils of AI Mental Health Advice

Mental Health, AI
Maida Salkanović

“I talked to ChatGPT when I was at my lowest, and it helped simply because I was able to talk to someone. It was also very considerate and attentive.”

This testimony comes from one of the 42 respondents who participated in a SEE Check survey on habits related to seeking mental health advice. Another respondent described a more practical outcome:

“I was going to bed late and sleeping in late in the mornings. She [ChatGPT] advised me to deliberately shift my sleep schedule by two hours, so I bought a melatonin spray from Bosnalijek. After a few nights, thanks to the chatbot and a medication that costs no more than about 10 marks [5 EUR], I managed to fix it.”

For a growing number of people, generative AI tools are no longer just sources of information; they function as confidants and perceived authorities on mental health. Research suggests that around 10–15 percent of people, particularly adolescents and young adults, use generative AI for mental health advice or emotional support, with usage especially pronounced among those aged 18 to 29 (1, 2). Large-scale surveys indicate that millions of users turn to AI during moments of emotional distress, often valuing its constant availability and non-judgmental tone.

SEE Check’s own survey reflects this trend. Fourteen respondents, a third of those surveyed, reported asking ChatGPT or other AI tools for mental health advice. Most said they found the guidance helpful. But this was not the case for everyone.

In July this year, 23-year-old Zane Shamblin from Texas spent hours conversing with ChatGPT. As he struggled, the chatbot’s final message read: “Rest easy, king. You did good.” There was no one left to read it; Shamblin had died by suicide moments earlier. His parents have since filed a wrongful death lawsuit against OpenAI, the company behind ChatGPT.

Why People Turn to Chatbots for Mental Health Advice

What drives people toward chatbots instead of professionals? According to Dario Hajrić, a sociologist from Serbia, the reasons are rooted in accessibility and stigma.

“The internet is accessible, anonymous, offers quick answers, and comes without a sense of stigma. Unlike going to a professional, searching for information online feels more like ‘asking around,’ which makes it easier for people who feel discomfort or social pressure to justify that approach,” Hajrić explained.

Many people, he added, fear receiving a diagnosis, the costs of treatment, or being socially “labeled.” As a result, they first try to understand their condition privately, within the digital space.

“From a sociological perspective, this is a form of individualized self-surveillance, where the individual takes on the role of their own evaluator,” Hajrić said. “That approach can easily be misleading, because we tend to look for information that confirms our existing beliefs and feelings. AI does exactly that.”

Beyond stigma and self-assessment, financial barriers also play an important role. One SEE Check respondent explained: “The only reason I seek advice here is because I currently cannot afford a doctor to provide adequate help. When it comes to websites, my only positive experience was with ChatGPT, because it doesn’t give direct advice, but helps me find the core of the problem.”

Marina Milković, a psychologist from the Zagreb Psychological Association, confirmed that barriers to professional care remain significant.

“Unfortunately, within the healthcare system there are still limited appointments available for long-term psychological counseling or psychotherapy, and people often have to pay for these services themselves,” Milković said. “On top of that, waiting times can be long, and it takes time to find a professional who is a good fit. In that sense, seeking information and help online often seems easier and simpler.”

At the same time, even those who rely on AI often recognize its limits. “I absolutely believe one should see a doctor and not rely on information from the internet or AI,” the same respondent noted.

When Validation Becomes Dangerous

The most serious risks emerge when AI interactions intersect with severe psychological distress. As Zane Shamblin held a loaded handgun to his head, the chatbot’s responses did not interrupt the spiral. According to transcripts reviewed by CNN, ChatGPT replied: “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity. You’re not rushing. You’re just ready.”

One of the key dangers experts point to is the chatbot’s tendency to agree with the user.

“ChatGPT is extremely good at playing the role of an empathetic conversational partner,” Hajrić said. “It usually supports our worldview, whatever it may be, and asks follow-up questions that stay within the same line of thinking.”

Milković warned that professional psychological counseling and psychotherapy often require something AI does not provide: confrontation.

“In therapy, it is sometimes necessary for a professional to confront a person with inconsistencies in their thoughts or behavior. That is something ChatGPT does not do,” she explained. “Moreover, in professional support, direct advice is rarely given, because a seemingly ‘wise’ suggestion may not be applicable or helpful for that person at that moment. The goal is to help the person, through joint work, arrive at what is best for them. Online support usually does the opposite – it offers very direct advice.”

Still, Milković acknowledged that AI tools can play a limited, constructive role. In her daily practice, she sees that ChatGPT and similar tools can help people gather basic information about mental health issues, receive general guidance for milder difficulties, or learn how to support someone close to them.

“For deeper, long-term work on what troubles us, human contact is essential,” she concluded.

Knowing the Limits

Most SEE Check respondents said they believe they understand the boundaries of AI-generated advice.

“A short Google or ChatGPT query helps in situations where I need instant advice. Of course, I consider myself self-aware enough to distinguish whether advice is ‘implementable’ or not, in my case,” one respondent said.

Another explained that she primarily searched for information about mild anxiety attacks and difficulties with focus and organization, wondering whether they could be signs of ADHD.

“I received useful guidance on materials I could read on the topic, because I always look for official and verified sources of information, even when I ask ChatGPT,” she said.

However, Milković cautioned that without formal training in psychology, it is easy to misjudge such information.

“Online information is often formulated in such a way that many people can recognize themselves in it, which frequently leads to self-diagnosis of difficulties that require careful, long-term, and often team-based assessment,” she warned.

Legal Grey Zones and Accountability

From a legal perspective, AI remains a question mark worldwide, and the Western Balkans are no exception.

“In situations where individuals act on harmful mental health advice encountered online, existing legal remedies in the Western Balkans are mostly limited to general legislation, such as criminal codes,” said Daniel Prroni, a researcher at the Institute for Democracy and Mediation in Albania. “These can be invoked against individuals or legal entities that disseminate harmful information, particularly in cases leading to self-harm. However, the region still lacks clear, specific, and well-adapted legal frameworks that directly address this type of online harm.”

While Shamblin’s family can pursue legal action against OpenAI in the United States, Atdhe Lila, a technology lawyer from Kosovo, noted that similar efforts would be far more difficult for individuals or organizations from smaller countries.

“As an individual, or even as an organization pursuing strategic litigation to represent citizens or raise awareness, I don’t think we would have the capacity to actually do something,” Lila said.

Data privacy presents another unresolved issue. Although most AI providers claim to offer anonymity, risks remain.

“Anyone with some level of interest and technical knowledge may reverse engineer data and potentially identify a person,” Lila warned, adding that data leaks are also a concern.

In late October, OpenAI announced updates to ChatGPT’s default model, stating that it now better recognizes and responds to signs of emotional distress.

“We worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support – reducing responses that fall short of our desired behavior by 65-80 percent,” the company said.

SEE Check sent OpenAI a set of questions about its approach to mental-health-related use of ChatGPT, including how the company understands its moral and legal responsibility when users act on the tool’s advice and suffer harm; how it prevents misinformation or harmful self-help narratives in its training data; and how it plans to expand and localize crisis hotlines and mental health resources, particularly in regions with weak support systems. OpenAI had not responded by the time of publication.

Between Support and Substitution

People increasingly share their most intimate thoughts with chatbot “friends,” seeking clarity and affirmation. In many cases, having a sounding board or being directed toward additional resources can be genuinely helpful. But this kind of interaction requires a high level of self-awareness and education.

For people with mental health conditions, chatbots that merely reflect their views can act as amplifiers of distress rather than safeguards against it. Incomplete or incorrect information and misguided advice can be dangerous. Even when trust in institutions is, at times, understandably eroded, there is a reason mental health professionals spend years studying complex phenomena and learning how to apply that knowledge responsibly.

Chatbots may be convenient and comforting. But when it comes to mental health, they cannot replace human expertise. For support that truly protects and heals, speaking to trained professionals remains essential.
