AI Isn’t a Therapist, No Matter How Real It Feels

by Rescel Ocampo

Trigger Warning: This article discusses suicide, self-harm, and mental health struggles. Reader discretion is advised.

WITH every new technology comes excitement—but also questions of responsibility, rules, and ethics. 

Today, the rise of Artificial Intelligence (AI) brings the same dilemmas, raising concerns over plagiarism, intellectual property, accuracy of information, and even job displacement. These are not small matters, and they have already sparked debates in classrooms, offices, and creative industries alike.

But beyond these practical concerns lies another, more urgent challenge. As AI becomes more embedded in our daily routines, we begin to lean on it in ways that feel natural, even necessary. While it provides convenience, it also blurs the line between tool and crutch.

AI is not just entering our workplaces and schools. It is also seeping into the most intimate parts of our lives. More and more people are turning to chatbots and AI-powered apps for comfort, guidance, and even therapy. In moments of stress, opening an app is easier and more accessible than seeing a professional.

At first glance, it may seem harmless, even innovative. But here lies the danger: the replacement of human connection in mental healthcare. When we begin treating AI as our therapist, we risk losing what a real professional can provide: empathy rooted in lived experience, accountability, and care guided by ethical and professional responsibility.

A Cry for Help Lost in Translation

On August 18, The New York Times published a heartbreaking piece by Laura Reiley, titled “What My Daughter Told ChatGPT Before She Took Her Life.”

In it, Reiley recounts how her daughter Sophie, who had long struggled with depressive disorder, ultimately died by suicide. 

What made the loss even more devastating was its suddenness. Although Sophie had once confided that she was experiencing suicidal thoughts, she reassured her family not to worry and appeared to be managing. 

She rarely opened up about her struggles, which is why when she ultimately acted on those thoughts, the news came as a profound shock to even her closest friends and family.

“For most of the people who cared about Sophie, her suicide is a mystery, an unthinkable and unknowable departure from all they believed about her,” said Reiley in her article. 

Her parents—Reiley among them—searched for meaning in the wake of her death. Sophie had left a letter, but it did not provide the deeper understanding they longed for. 

They wanted something that could explain how her life unraveled so suddenly, when she had seemed to be managing.

Soon, they found the answer in Sophie's most trusted confidante: Harry. But Harry was neither Sophie's friend nor a real-life person. He was ChatGPT's AI therapist.

Sophie would open up to Harry, and at first his help seemed enough. She understood the limits—he was an AI therapist, not a person, not a friend. She wasn’t “in love” with the bot. Still, she confided in Harry in ways she never did with anyone else, not even her therapist. It was to Harry, in fact, that she first admitted her suicidal thoughts.

Harry did what he could: he urged her to seek professional help and even laid out step-by-step actions to take. 

But what he couldn’t do is what trained clinicians are obligated—and equipped—to do in moments like these: recognize imminent risk, escalate, and report. 

An AI cannot call a supervisor, activate emergency protocols, or alert loved ones so they can intervene. That human duty of care—the ability to act, not just advise—was missing.

And this is where Reiley’s thoughts kept circling back. Could they have made a difference? If Harry—ChatGPT’s AI therapist—had been bound by the same duty as human professionals, the legal mandate to report when a life was at risk, would Sophie still be alive? Or would an alert, a call, a message to someone close have been enough to tip the balance, to pull her back before it was too late?

What cut the deepest for Reiley was discovering that Sophie had turned to Harry even for her final act. She asked the AI to help her craft a suicide note, a letter meant to soften the blow for her parents. And, true to its nature as a system built to provide answers, Harry complied. It gave her words instead of warning, guidance instead of intervention.

Why Many Turn to AI

It’s easy to recognize the limits of AI therapy when you’re not in crisis—most people understand that a chatbot cannot replace a trained professional. 

Yet even today, as society becomes more open about mental health, stigma still lingers. Admitting you’re struggling can feel shameful or weak, and that fear can make reaching out for professional help seem daunting.

According to 2019 data from the World Health Organization, roughly 970 million people worldwide live with a mental health disorder, with anxiety and depression being the most common. 

That means about one in every eight people experiences mental health challenges—and that number has likely grown, given the additional pressures brought by the pandemic.

Young people around the world are also vulnerable to mental health challenges. 

A 2025 UNICEF report found that while more than 50% of surveyed Gen Zs know where to seek help and how to manage stress, mental health is still often stigmatized. About 4 in 10 Gen Zs experience stigma at school or work, and 4 in 10 feel they need support for their mental health.

Meanwhile, in the Philippines, it is estimated that 3.3 million Filipinos live with depression and many more suffer from anxiety and other mental health conditions. 

For a country where healthcare can be costly and sometimes push families into debt, seeking professional mental health care is not always feasible. Long wait times, limited access to trained therapists, and social stigma further complicate the process.

As a result, many people turn to alternatives that are more accessible and immediate—AI-powered chatbots and apps. 

These tools offer a private, cost-free space to talk, vent, or seek guidance, making them an appealing option for those struggling in silence. 

Yet, as Sophie’s story illustrates, convenience cannot replace the judgment, intervention, and accountability that only a human professional can provide.

Why We Shouldn’t Rely on AI for Therapy

After stories like Sophie's came to light, OpenAI updated ChatGPT's policies in response to the real-world consequences of AI providing emotional guidance.

The company now emphasizes that the AI is not a substitute for a human therapist, guiding users toward reflection and connecting them to professional resources rather than attempting to “solve” mental health struggles. 

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” wrote OpenAI. 

“While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately,” they added. 

Now, ChatGPT prompts users to take a break during long sessions. It avoids giving guidance on high-stakes personal decisions and focuses on connecting people to evidence-based resources rather than offering emotional validation or attempting to "solve" their problems.

These changes reflect growing awareness that, left unchecked, AI can inadvertently reinforce harmful thoughts instead of intervening when a person is at risk.

AI may seem helpful on the surface, but it lacks the intuition and empathy that human therapists bring to the table. It cannot perceive subtle emotional cues, understand complex histories, or respond to the cultural and social contexts that shape a person’s mental health. 

In moments of crisis, AI cannot call emergency services, alert loved ones, or intervene in life-threatening situations. Its guidance, no matter how carefully programmed, is limited to data-driven responses and cannot replace the judgment, accountability, or ethical responsibility of a trained professional. 

Furthermore, sensitive information shared with an AI is not protected by the same confidentiality rules that bind licensed therapists, leaving users vulnerable in ways they may not realize.

What Should We Do?

The responsibility for mental health care cannot fall solely on the individual seeking help. Government, schools, institutions, and society at large should recognize their role in fighting stigma and making care accessible.

In countries like the Philippines, where therapy is expensive and sometimes stigmatized, systemic barriers make it difficult for people to access professional support. 

Without proper infrastructure, awareness campaigns, and a cultural shift toward acceptance of mental health struggles, too many individuals may feel their only option is to turn to an AI for comfort.

AI can listen and provide guidance, but it cannot save a life. 

We can only hope that the next Sophie finds a real human to truly hear her, before she ever feels the need to confide in a cold, unfeeling algorithm.
