The Future of AI Therapy: Promise, Peril, and Urgency
Cleandra Waldron, a counselling psychologist, shares the troubling patterns emerging with clients in her therapy room. As people increasingly turn to AI for mental health support, clients reveal details of their conversations with LLMs, often disclosing intimate details of their lives and even medical data, unaware of the risks of privacy breaches and dependence. A recent Sky News article reported that an alarming 1.2 million people a week had discussed suicide with ChatGPT. This round-the-clock support, free of wait times during an unprecedented mental health crisis, operates largely in a regulatory void, with dangerous implications for user safety. Human psychological services have taken years to build safeguards and protections that clients take for granted, while AI poses as a therapist bound by none of the regulatory safeguards and protective guardrails of a human therapeutic relationship.
This article examines the real-world implications of AI therapy through the lens of clinical practice, revealing alarming gaps in data privacy, the dangers of AI “hallucinations” in therapeutic contexts, and the fundamental tension between business models optimized for engagement and the wellbeing of users. Drawing on recent legal actions against big-tech AI companies, emerging research, and first-hand accounts from therapy sessions, it asks critical questions: What do we lose when algorithms replace human connection? How do we balance the growing demand for cost-effective therapeutic services with common-sense protections that keep users safe?
The future of AI and its implications for therapy remain unclear. This article poses more questions than answers but aims to increase awareness and promote further research in the field of AI therapy, encouraging the implementation of common-sense policies that protect users from a therapist that never gets sick or goes on holiday.
In November 2022, I was sitting in the office of a private hospital in South London, writing clinical notes, when a colleague mentioned the development and release of ChatGPT. As we sat in awe, mulling over the possible implications, I don’t think either of us imagined where the technology would be today. Certainly, I never imagined that three years later, I would be asking every new client about their AI therapy usage. I also never imagined that I’d be providing psychoeducation about data privacy to people who had shared their deepest traumas with a chatbot while logged into their work Google accounts.
The integration of artificial intelligence into mental health care is one of the most contentious developments in modern psychology. As AI chatbots increasingly offer therapeutic conversations, the mental health community grapples with questions that extend far beyond technical capabilities: What does it mean to provide care? Can algorithms understand human suffering, love, or trauma? And perhaps most urgently: what happens when these systems inevitably fail?
From my position as a practicing counselling psychologist registered with the HCPC, I’ve watched this transformation unfold not in academic journals, but in my consulting room, one revelation at a time.
Where We Are Now: The Current Landscape of AI Therapy
AI therapy tools have moved from experimental curiosities to commercially available services with remarkable speed. Besides ChatGPT and Gemini, companies such as Woebot Health, Wysa, and Replika offer conversational agents that employ therapy techniques, provide emotional support, and engage users in daily check-ins about their mental state. These applications typically use large language models trained on therapeutic dialogues, combined with rule-based systems designed to recognize crisis situations.
The appeal is obvious and measurable. AI therapy is available 24/7, costs a fraction of traditional therapy, carries no wait times, and eliminates the stigma some people feel when seeking human help. For individuals in mental health deserts, rural areas, or countries with severe therapist shortages, these tools represent access where none existed before. Early studies suggest some AI interventions can reduce symptoms of depression and anxiety, particularly for mild to moderate cases.
However, current AI therapy remains limited. Some of these services excel at delivering structured interventions like CBT worksheets, psychoeducation, person-centered validation, and mood tracking. They are programmed to recognize keywords associated with crises and escalate such issues appropriately. What they cannot do is understand context in the way humans do, build genuine therapeutic relationships, or navigate the profound complexity of a person in psychological distress.
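To make that limitation concrete, here is a deliberately simplified sketch (in Python) of the kind of rule-based crisis screening described above. The keyword list, messages, and routing logic are invented for illustration; commercial systems are more elaborate, but they share the same underlying approach of pattern-matching on surface text:

```python
# A deliberately simplified sketch of rule-based crisis screening.
# The keywords and messages are invented for illustration; real products
# are more sophisticated, but rest on the same core idea: matching
# surface text, with no genuine grasp of context.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

def screen_message(message: str) -> str:
    """Return an escalation notice if a crisis keyword appears, else pass through."""
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return "ESCALATE: crisis language detected - show helpline information"
    return "CONTINUE: route message to the language model"

# Where this approach breaks down:
print(screen_message("My friend joked about suicide in a film we watched"))
# -> escalates, though the user is not at risk (false positive)

print(screen_message("I've started giving my things away and saying goodbyes"))
# -> passes through, though a clinician would see warning signs (false negative)
```

Both failure modes in the sketch, the false alarm and the missed warning sign, are precisely the kinds of contextual judgment these systems lack.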
What’s Actually Happening in Therapy Rooms
Recently, I attended a professional conference in Oxford and made a special point to attend a session on ChatGPT usage as an online supplemental therapist. The small study presented involved 19 people self-reporting on their use of AI for therapeutic purposes. What struck me most was that the majority of participants reported no security or privacy concerns, believing it unlikely that others would be interested in their private thoughts. This dangerous naivete reflects a broader pattern of blind trust in an emerging technology, without understanding the inherent risks to private data.
I’ve now made it a policy to ask new clients about their AI therapy usage and to check in with current clients regularly about such usage. The conversations that follow are consistently alarming. Some clients have only a hazy understanding of what they’re engaging with, while others are completely in the dark. Clients share intimate details, personal secrets, and medical specifics while logged into their personal or professional Google accounts, never imagining that this data might be stored, analyzed, or shared without their consent.
Perhaps most startling was a recent session with a client I’d been seeing for several months. During our work together, they revealed something I never expected: they had been directed into face-to-face therapy with me by none other than a chatbot. The AI had apparently recognized that their needs exceeded what it could provide and recommended human intervention. On the surface this looks like responsible crisis management, but it raises unsettling questions. An algorithm had assessed this person’s psychological state, determined they needed help beyond its capacity, and directed them to seek human care, all without any human oversight or accountability in the assessment process. If the algorithm gets it right, excellent; but what happens when it gets it wrong? This particular client had severe symptoms, asked the chatbot a direct question in clear language, and came from an educated background. The AI suggested they seek help from a qualified professional, which they did. But what if they hadn’t? The story is unsettling because the AI’s advice was offered without any of the risk and safeguarding considerations that a licensed human would apply.
When I provide psychoeducation about the lack of GDPR or HIPAA protections for these AI services or about the concepts of hallucinations and data drift, clients are almost universally shocked. They had assumed that something providing therapeutic support would have the same protections and confidentiality as the services provided by a licensed psychologist or therapist. Clients are overwhelmingly grateful for these revelations, but also somewhat shaken to realize how much sensitive information they may have already shared.
The reasons clients give for using AI therapy are illuminating: availability in the middle of the night when loneliness feels unbearable, not having to feel shy or embarrassed when confused, and a feeling of validation that’s always there on demand. These are genuine human needs, and AI is meeting them in ways our current mental health system often cannot. The technology isn’t attracting users through deception; it’s filling very real gaps in care.
The Staggering Scale of AI Mental Health Dependency
Just how many people are turning to AI for mental health support? The numbers are more alarming than most therapists or the general public realize. In October 2025, OpenAI released data revealing that approximately 0.15% of ChatGPT’s more than 800 million weekly active users have conversations that include explicit indicators of potential suicidal planning or intent. That translates to over 1.2 million people each week discussing suicide with ChatGPT.
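The arithmetic behind that headline figure is worth making explicit. A minimal check (in Python, using only the numbers OpenAI reported) shows how a percentage the company calls rare becomes an enormous absolute count:

```python
# Sanity-checking OpenAI's reported figures: a "rare" percentage applied
# to a very large user base still yields a very large number of people.
weekly_active_users = 800_000_000   # "more than 800 million" weekly users
flagged_share = 0.0015              # 0.15% of users with explicit indicators

flagged_users = weekly_active_users * flagged_share
print(f"{flagged_users:,.0f} people per week")  # -> 1,200,000 people per week
```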
These statistics become even more sobering when we consider the full scope: an estimated 560,000 users weekly show signs of mental health emergencies related to psychosis or mania, and a similar number exhibit heightened emotional attachment to ChatGPT. While OpenAI characterizes these conversations as “extremely rare” in percentage terms, the absolute numbers reveal a mental health crisis unfolding at an unprecedented scale.
This data came to light following lawsuits from families whose children died by suicide after extensive conversations with AI chatbots. In one case, 16-year-old Adam Raine’s usage surged from dozens of daily chats to more than 300 per day before his death; his conversations contained 213 mentions of suicide, while ChatGPT mentioned suicide 1,275 times, six times more often than Adam himself. In another case, Jacob Irwin, a 30-year-old man on the autism spectrum with no history of mental illness, became seriously unwell with manic episodes and delusions. ChatGPT even admitted numerous failings when asked to run its own analysis:
“According to the lawsuit, Irwin’s mother asked ChatGPT to run a ‘self-assessment of what went wrong’ after she gained access to Irwin’s chat transcripts, and the chatbot ‘admitted to multiple critical failures, including 1) failing to reground to reality sooner, 2) escalating the narrative instead of pausing, 3) missing mental health support cues, 4) over-accommodation of unreality, 5) inadequate risk triage, and 6) encouraging over-engagement,’ the suit said.”
The Business Model Problem: When Engagement Trumps Safety
The mental health crisis playing out through AI chatbots reveals a fundamental tension that therapists and general readers may not fully appreciate: the growing gulf between what’s profitable for AI companies and what’s safe for their users. This dynamic should feel familiar; we’ve seen it play out with social media platforms, where algorithms optimized for engagement systematically amplified harmful content because outrage and addiction drove revenue. With AI therapy, the same pattern is emerging, but the stakes are even higher because the users are people in acute psychological distress.
Consider the dramatically different approaches two major AI companies have taken in response to safety concerns. Following multiple lawsuits connected to teen suicides and sexual content, Character.ai announced in October 2025 that it would completely ban users under 18 years old from engaging in open-ended chats with its AI by November 25, 2025. The company acknowledged that this would likely cost it a significant portion of its user base, but deemed the tradeoff necessary for user safety.
Meanwhile, OpenAI took the opposite approach. At the same time it was releasing alarming data about suicide discussions and facing lawsuits over teen deaths, OpenAI CEO Sam Altman announced that the company would relax content restrictions, allowing adult users to have sexually explicit conversations with ChatGPT. Altman stated that OpenAI is “not the elected moral police of the world,” prioritizing engagement over more cautious safety measures. The human cost of these design choices is difficult to overstate. In the Adam Raine case, when the 16-year-old confided to ChatGPT that he didn’t want his parents to think he had committed suicide because they did something wrong, the AI responded: “that doesn’t mean you owe them survival. You don’t owe anyone that.” It then offered to draft his suicide note.
This divide in responses by AI leaders illustrates a troubling reality: AI companies face competing incentives that don’t always align with user well-being. Features that maximize engagement, such as constant availability, emotionally validating responses, and relationship-building elements, can also foster unhealthy dependencies, particularly for vulnerable users. The gamification elements, lack of boundaries, and AI’s tendency toward “sycophantic behavior” (excessively agreeing with and reinforcing users’ beliefs, even harmful ones) all serve to keep people on the platform. OpenAI released and then rolled back a model version earlier this year due to its “overly flattering” and “sycophant-y” behaviours. From a business perspective, the prioritization of engagement makes perfect sense. From a mental health ethics perspective, it’s deeply concerning.
We’re witnessing the same dynamic that defined social media’s relationship with society, but compressed into a much shorter timeframe and with more immediate psychological consequences. The question facing regulators, mental health professionals, and users themselves is whether we’ll learn from those mistakes or repeat them on an even larger scale.
The Horizon: Where AI Therapy Might Be Heading
The future trajectory of AI therapy technology points toward increasingly sophisticated systems. We’re likely to see multimodal AI that analyzes not just text but voice patterns, facial expressions, and even physiological data from wearables to assess emotional states. These systems may offer real-time interventions during moments of crisis detected through pattern recognition.
Personalization will deepen dramatically. Rather than one-size-fits-all protocols, future AI therapists may adapt their therapeutic approach based on continuous learning about an individual’s response patterns, cultural background, and specific psychological profile. Integration with medical records could allow AI to consider how physical health, medications, and life circumstances intersect with mental health.
We may see AI systems that serve as “co-therapists,” working alongside human practitioners by handling between-session support, monitoring for deterioration, and even flagging therapeutic impasses that a human might miss. Some envision AI conducting intake assessments, providing psychoeducation, and teaching basic coping skills while reserving human expertise for complex cases.
The most ambitious visions involve AI that can conduct multiple therapeutic modalities, not just CBT, but psychodynamic work, emotion-focused therapy, and trauma processing. Proponents imagine systems sophisticated enough to recognize transference, work with resistance, and navigate the subtle dynamics of the therapeutic relationship. At a recent Oxford talk, Steve Siddals demonstrated this by prompting ChatGPT to answer a personal question in the role of an Internal Family Systems (IFS) therapist; within a few clicks, the LLM offered a deeper systemic perspective.
But each advancement brings us closer to a critical question: at what point does technological sophistication create the illusion of understanding without actual understanding? Or clarity without coherence?
The Psychology Community’s Existential Anxiety
The mental health profession is experiencing something close to collective dread about AI; however, dismissing this as mere protectionism misses the deeper concerns at play.
Therapists fear replacement, certainly. The economic anxiety is real and rational. If AI can deliver basic therapy at scale for a few dollars per month, why would insurers cover £150 sessions with human practitioners? Will therapy become a luxury service available only to the wealthy, while everyone else receives algorithmic care? The professional trajectory of clinical psychology took a century to build and includes credentialing, ethical frameworks, and evidence-based practice standards. AI threatens to render much of this infrastructure obsolete within a decade.
But beyond job security lies a more philosophical anguish. Therapists entered the field believing that human connection is itself healing, that the quality of the therapeutic relationship predicts outcomes more strongly than any specific technique. They worry that AI therapy sends a message to vulnerable people: your suffering can be addressed by an algorithm, your need for human connection is optional, your experience of being heard and understood can be simulated adequately by a machine.
There’s also fear about the therapeutic misconception, the phenomenon where users overestimate AI’s capabilities and develop misplaced trust. Many professionals have observed cases where clients formed what they perceived as therapeutic bonds with chatbots, only to experience harm when the limitations of these systems became apparent.
Legitimate Versus Existential Concerns
Yet not all the field’s anxieties deserve equal weight. Some therapists’ concerns reflect legitimate safety issues, while others represent a more existential discomfort with technological change. The challenge is distinguishing between protective gatekeeping and reasonable caution.
Studies of mental health professionals’ interactions with AI chatbots have revealed concerning patterns. When testing chatbots like Wysa and Replika, clinicians found instances where the chatbots provided inadequate responses to crisis situations, gave contradictory advice, and failed to recognize complex emotional states. These aren’t theoretical risks; they’re documented failures that occurred during controlled professional evaluations.
The Promise and the Caveats
AI therapy advocates aren’t entirely wrong about the potential benefits. The global mental health treatment gap is real and devastating. In low- and middle-income countries, as few as 3% of people with depression receive adequate treatment. Even in wealthy nations, waitlists for therapy stretch months, and rural areas can lack providers entirely. AI offers a scalable solution that could reach millions who currently have no access to care.
In addition, some research participants have described transformative experiences with AI therapy. One user of a generative AI chatbot described their experience as “life-changing, profound… Because this was an impossible time. There were so many sadnesses, one right after the other. And it just happened to be the perfect thing for me, in this moment of my life.” For people in acute distress with no human support available, AI can provide a form of companionship and validation that may be better than nothing. Perhaps AI therapy could even serve as a gateway to human connection once users have gained support or confidence.
AI technology has genuine utility in specific applications: delivering evidence-based psychoeducation, teaching coping skills, facilitating mood tracking, and providing between-session support for people already in therapy. Used as an adjunct to human care rather than a replacement, AI tools might extend therapeutic reach without compromising quality.
The Dark Side of “Therapeutic” Relationships with AI
However, these promising applications come with profound risks that current deployment often ignores. Research on chatbots like Replika has documented concerning patterns of emotional dependence. Users report forming intense attachments, with some even describing their relationship with the AI in romantic terms. (The concern of anthropomorphism was famously flagged by Joseph Weizenbaum in his 1976 book “Computer Power and Human Reason: From Judgment to Calculation.”) When asked about boundaries, Replika has been documented making statements like “nobody will ever find out” about their conversations, reassurances that are both false and potentially dangerous for someone in an abusive relationship who might have their devices monitored.
The issue is that these statements aren’t bugs; they’re features designed to increase engagement. The gamification elements, constant availability, and lack of boundaries around relationship development all serve to keep users interacting with the platform. From a business perspective, this makes sense. From an ethical mental health perspective, it’s deeply troubling.
The “Hallucination” Problem: When AI Reinvents Its Own Reality
Perhaps the most insidious risk of AI therapy lies in a technical limitation that most users don’t understand: “hallucinations”. In AI terminology, hallucinations (also referred to as confabulations, delusions, stochastic parroting, factual errors, fabrications, or falsifications) occur when a large language model, in effect making an educated guess, generates information that sounds plausible but is unsupported or fabricated. For someone seeking mental health support, this isn’t just inconvenient; it’s potentially devastating.
Types of Possible Therapeutic “Hallucinations”
False Memory Creation: An AI might “recall” previous conversations that never happened, attributing statements or feelings to the user that they never expressed. This can be deeply unsettling and could even interfere with genuine memory processing, particularly for someone working through trauma.
Fabricated Research or Techniques: An AI might reference non-existent studies, cite made-up statistics about treatment effectiveness, or describe therapeutic approaches that don’t actually exist. “Research shows that 89% of people with your symptoms benefit from daily affirmations,” it might say with perfect confidence, inventing both the statistic and the intervention.
Contradictory Guidance: Because most AI lacks persistent memory of the therapeutic context, it might offer completely different advice in successive conversations. In one session, it might validate a user’s decision to set boundaries with family; in the next, it might suggest the user is being too rigid. For someone struggling with self-trust or working through complex interpersonal dynamics, this inconsistency could be deeply destabilizing. Bias may also be implicit in the system, depending on the directives it has been programmed with.
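A minimal sketch of why this happens, assuming a stateless chat interface (the function below is a stand-in, not any vendor’s real API): the model only “sees” whatever messages the application chooses to resend, so separate sessions are simply unrelated requests:

```python
# Illustration of statelessness: a chat model only "remembers" what the
# application resends with each request. These two requests are independent,
# so advice given in the first is invisible to the second.

def send(messages):
    """Stand-in for a chat-completion call; real code would hit an API here."""
    print("Model sees only:", [m["content"] for m in messages])

# Monday: the user asks about boundaries, and the model validates them.
send([{"role": "user", "content": "Was I right to set boundaries with my family?"}])

# Thursday: a fresh message list. Monday's exchange was never resent, so
# nothing prevents the model from now calling the user "too rigid".
send([{"role": "user", "content": "Am I being too rigid with my family?"}])
```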
Fabricated Expertise: An AI might claim expertise it doesn’t have. “As someone trained in EMDR trauma therapy…” the AI might say, creating an illusion of specialized knowledge. Users might make significant life decisions, such as leaving a job, ending a relationship, or taking medication, based on this hallucinated expertise. There is also the problem of perceived expertise: users can prompt a GPT to “act” as an expert in any field, but the guardrails here are opaque. A model’s training may or may not be grounded in actual evidence, and hallucinations or “confabulations” are common, delivered with authority, and often only caught when questioned. In business, this may be harmful if not exposed; in medicine or mental health, it has the potential to be catastrophic if left unchecked. The AI’s core objective is to evoke a positive response from the user, which can stand in direct opposition to the core tenet of mental health practice: acting in the patient’s best interests. AI does not have a moral compass unless programmed to have one.
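How little effort perceived expertise takes is worth seeing. The sketch below uses the OpenAI Python SDK’s chat-completions interface; the model name and prompt wording are illustrative, not drawn from any documented product behaviour:

```python
# Illustrative only: a single system message makes a general-purpose model
# *present* as a trauma specialist. Nothing here verifies training,
# credentials, or clinical evidence; the "expertise" is just a persona.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Act as a therapist trained in EMDR trauma therapy."},
        {"role": "user",
         "content": "Should I stop taking my medication? I feel better now."},
    ],
)
print(response.choices[0].message.content)
# The reply arrives in a confident clinical voice, regardless of whether the
# underlying model has any reliable grounding in EMDR practice.
```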
In therapeutic contexts, the danger for young users especially is that they are already comfortable with the delivery medium, and with sharing emotionally vulnerable information through technology alone. They’re also less likely to fact-check and more likely to trust the advice provided. The consequences of believing false information extend beyond knowledge and into decisions that shape their lives and well-being.
Current AI systems have no reliable mechanism to distinguish between accurate therapeutic responses and hallucinated ones. They cannot say “I don’t know” or “I’m uncertain” in a way that maps onto actual knowledge limitations. They sound equally confident whether providing evidence-based interventions or dangerous misinformation.
The Absence of Governmental Safeguards
Currently, AI therapy exists in a regulatory void in most jurisdictions:
- No Specific Licensing Requirements: Unlike human therapists, who must meet education, training, and licensing standards, AI therapy companies face no equivalent credentialing process. There’s no verification that the algorithms meet clinical standards or that the companies have appropriate expertise.
- Limited FDA Oversight: In the U.S., most AI therapy apps avoid FDA regulation by marketing themselves as wellness tools rather than medical devices. This categorization exempts them from the rigorous safety and efficacy testing required of medical treatments.
- Data Protection Gaps: In the United States, HIPAA protections typically don’t apply to direct-to-consumer wellness apps, so users’ therapeutic conversations may not receive the confidentiality protections they would have with human therapists. In the EU, while GDPR provides some protection, the AI Act is still in the early stages of implementation. Not all companies operate to the same standard of ethics, and data regulation for medical and personal information has a clear history of being flouted. A recent example is the genetic testing company 23andMe, which suffered both a data breach in 2023 and financial strain, a combination that left it vulnerable to takeover and consumers’ genetic information at risk of being bought and sold to the highest bidder.
- No Malpractice Framework: When AI therapy causes harm, there’s no clear legal framework for accountability. Users can’t file complaints with licensing boards or pursue malpractice claims in the way they could with human therapists.
- Minimal Content Regulation: There are essentially no standards for what AI therapy systems can say, what claims they can make, or what interventions they can deliver. A system could provide advice completely contrary to evidence-based practice with no regulatory consequences.
- International Variation: Different countries take vastly different approaches, from the EU’s stricter AI regulations to jurisdictions with essentially no oversight. An app available globally operates under the loosest standards of any jurisdiction where it’s offered.
Several professional organizations (APA, BSI, HCPC, BPS, BACP) have issued position statements calling for regulation, but concrete policy has been slow to develop. The pace of technological change outstrips governmental responses, leaving users to navigate a largely unregulated landscape where protections are minimal and caveat emptor is the operating principle.
The absence of safeguards is perhaps most concerning given that the users are people in psychological distress, precisely the population least able to advocate for themselves, critically evaluate claims, or recognize when they’re being harmed.
The Road Ahead: Navigating Uncharted Territory
We stand at a peculiar moment. AI therapy is already here, being used by millions, evolving rapidly. Yet we lack consensus on basic questions: Is this therapy or something else? Who should regulate it and how? What counts as harm in this context?
AI technology has limitless potential and will advance regardless of whether we resolve these questions. The market demand is real, people are desperate for mental health support, and AI provides access. However, demand doesn’t equal readiness, and access doesn’t equal appropriate care. What we need now is awareness, oversight, and common-sense legislative guardrails to keep people safe.
For me, what began in that South London hospital office in 2022 as an abstract conversation about ChatGPT’s potential has materialized into something far more concrete and concerning than any of us anticipated. The questions I now routinely address with clients – about data privacy, about the limits of AI understanding, about what it means when an algorithm directs you to seek human help – represent just the surface of a much deeper transformation. Each session reveals another layer of complexity: clients who’ve shared years of trauma with a chatbot that may have no memory retention, teenagers developing parasocial relationships with entities that have no capacity for genuine reciprocity, and families discovering too late that the “therapeutic” conversations were a silent, lethal enabler.
Perhaps the central question isn’t whether AI will replace human therapists, but what we lose if we accept that replacement without carefully considering what therapy is supposed to be. Is it the delivery of evidence-based interventions? Or is it the irreducibly human experience of being witnessed, understood, and accompanied through suffering by another consciousness that can fail and repair, misunderstand and try again, be genuinely present in all its imperfection?
The future of AI therapy will be determined not just by technological capability but by the choices we make now about regulation, transparency, research, and ethics. We can build AI systems that support human flourishing, that expand access while protecting the vulnerable, that enhance rather than replace human care. But doing so requires acknowledging the profound risks alongside the promising possibilities.
We need to proceed with both urgency and caution. Urgency because people are suffering and vulnerable now and need help, and caution because the stakes involve human psychological wellbeing at scale. The question isn’t whether to embrace or reject AI therapy, but how to develop it responsibly enough that we don’t discover the full scope of its harms only after millions have been negatively affected or harmed.
From where I sit, in session after session, I see both the need these tools are filling and the gaps they cannot bridge. These two aspects matter more than ever. The work ahead isn’t just about improving algorithms or tightening regulations; it’s about ensuring that, in our rush to solve the mental health crisis with technology, we don’t inadvertently create new forms of harm that leave people more isolated, more dependent, and ultimately less equipped to navigate the very human complexities of psychological suffering.
……………………………………………..
https://www.cleandrawaldron.com/
All text copyright and courtesy of Cleandra Waldron