
The Therabot Trial in 2026: What Happened When AI Behavioural Therapy Beat Antidepressants

In March 2025, researchers at Dartmouth published results that made mental health professionals pause mid-scroll.

A randomized controlled trial showed that people using an AI therapy chatbot called “Therabot” experienced a 51% average reduction in depression symptoms. The effect sizes exceeded those commonly reported for SSRIs in clinical trials and approached the results of traditional human therapy.

This wasn’t a tech company press release. This was peer-reviewed clinical research.

The study raises questions that extend beyond whether AI works. It forces a harder conversation about what "works" means in behavioural therapy and behavioural health, and about what happens when technology solves the access problem but creates new ones we haven't prepared for.

The Access Problem AI Actually Solved

Over 100 million people worldwide now use AI chatbots for mental health support. The market for AI-powered behavioural therapy reached $992.1 million in 2025 and is projected to reach $2.7 billion by 2035.
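As a rough sanity check on that projection (a sketch using only the figures quoted above; the 2025-to-2035 horizon is taken from the paragraph), the implied compound annual growth rate works out to roughly 10.5% per year:

```python
# Implied compound annual growth rate (CAGR) for the AI-powered
# behavioural-therapy market, using the figures cited above.
start_value = 992.1e6   # market size in 2025 (USD)
end_value = 2.7e9       # projected market size in 2035 (USD)
years = 10              # 2025 -> 2035

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 10.5% per year
```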

The growth reflects a brutal reality: most people have a six-week window of motivation to seek mental health services. If they can’t secure a first appointment within that window, motivation drops significantly.

Traditional therapy faces obstacles that AI sidesteps entirely:

  • Stigma disappears when you’re texting a chatbot at 2am
  • Cost barriers vanish with free or low-cost apps
  • Geographic restrictions become irrelevant
  • Waitlists compress from weeks to seconds

AI didn’t just make therapy more convenient. It made therapy available when human systems couldn’t scale fast enough to meet demand.

What the Clinical Data Actually Shows

The Dartmouth trial wasn’t an isolated success story.

A separate study comparing AI chatbot interventions to bibliotherapy found statistically significant improvements in both depression and anxiety scores, with effect sizes ranging from moderate to large (Cohen’s d = 0.6–0.8). Participants in the chatbot group also reported significantly higher Working Alliance Inventory scores, suggesting they felt a stronger connection and engagement with the AI than with self-help reading materials.
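For readers less familiar with effect sizes, Cohen's d is the difference between two group means divided by their pooled standard deviation; by convention, values around 0.5 count as moderate and 0.8 as large. A minimal sketch with made-up scores (illustration only, not the study's actual data):

```python
from statistics import mean, stdev

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d with a pooled standard deviation (two independent groups)."""
    n1, n2 = len(group_a), len(group_b)
    s1, s2 = stdev(group_a), stdev(group_b)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical symptom-improvement scores (not trial data)
chatbot_group = [10, 12, 14]
bibliotherapy_group = [7, 9, 11]
print(cohens_d(chatbot_group, bibliotherapy_group))  # 1.5
```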

Machine learning models now analyze therapy session transcripts, patient feedback, and progress over time to help therapists personalize their approaches. AI evaluates risk factors for developing specific mental health disorders based on individual profiles, genetic predisposition, and environmental factors.

In September 2025, CMS established a new billing code for “Remote Therapeutic Monitoring” of CBT, allowing therapists to be reimbursed for reviewing patient data from digital behavioural therapy apps. This solved a major reimbursement barrier that had prevented adoption.

The infrastructure is being built. The evidence is accumulating. The question is no longer whether AI therapy works.

The question is: what are we missing?

The Gaps Between Efficacy and Safety

A Stanford study revealed findings that complicate the narrative.

AI therapy chatbots may reinforce harmful stigma and produce dangerous responses. While they successfully reduce anxiety in controlled settings, the personal engagement and emotional support offered by a human therapist produce more profound therapeutic effects, especially in high-stress or crisis circumstances.

The difference likely stems from direct human connection and the therapeutic relationship itself, elements that AI can simulate but not replicate.

More concerning: only 4% of the 10,000+ mental health apps available today have demonstrated clinical efficacy. Provider recommendations remain modest, as does provider knowledge about these tools and follow-up with patients about their use.

Adoption is largely passive. Patients download apps. Providers shrug. No one tracks outcomes systematically.

The Data Privacy Challenge

AI systems in mental healthcare require access to sensitive patient data: medical records, treatment histories, real-time emotional states. Safeguarding this information remains crucial to prevent unauthorized access or breaches.

AI algorithms can also inherit biases present in the data they’re trained on, leading to potential disparities in diagnosis and treatment recommendations. A system trained primarily on data from one demographic may fail to recognize symptoms that present differently in other populations.

These aren’t theoretical concerns. They’re active risks that scale with adoption.

The Hybrid Model That’s Actually Working

The most promising results in behavioural therapy don’t come from AI replacing human therapists or humans ignoring AI tools.

They come from combining both.

Research shows that an AI-enabled, personalized behavioural therapy support tool used alongside human-led group therapy improves both efficacy and adherence to mental health care. Patients complete homework exercises with an AI chatbot between in-person behavioural therapy sessions, elevating treatment intensity and leading to better outcomes.

The AI handles:

  • Data collection and analysis
  • Real-time feedback and support between sessions
  • Personalized treatment plan adjustments based on individual responses
  • Immediate access during moments of acute need

The human therapist focuses on:

  • Complex emotional processing
  • Therapeutic relationship building
  • Crisis intervention
  • Nuanced clinical judgment

This division of labor doesn’t diminish the role of human therapists. It amplifies their impact by removing administrative burden and extending their reach between sessions.

What We’re Learning About Implementation

The Therabot trial and subsequent research teach us several lessons about integrating AI into behavioural therapy:

Training matters more than technology. Both therapists and patients need proper onboarding. Without it, tools sit unused or get misused.

Measurement drives improvement. The systems that track outcomes systematically show better results than those that don’t. You can’t optimize what you don’t measure.

Privacy must be proactive, not reactive. Data protection protocols need to be built into the foundation, not added later as a patch.

Bias detection requires constant vigilance. Algorithms need regular audits to identify and correct disparities in how they serve different populations.

Human oversight isn’t optional. AI should augment clinical decision-making, not replace it. The therapist remains responsible for treatment decisions.

The Question That Remains

The Therabot trial proved that AI therapy can produce clinically significant results. The market growth proves that people want it. The hybrid model proves that integration is possible.

What we haven’t proven yet is whether the mental health field can implement AI responsibly at scale.

The technology moved faster than the ethics, faster than the regulation, faster than the training infrastructure. Now we’re catching up in real-time while millions of people use these tools.

The lesson isn’t that AI therapy works or doesn’t work. The lesson is that efficacy without safety infrastructure creates new problems while solving old ones.

We have the clinical evidence. We have the technological capability. What we need now is the implementation framework that ensures AI serves as a tool to enhance human care, not a shortcut that bypasses it.

The Therabot trial opened a door. What walks through it depends on the decisions we make next.
