By Christina Mathieson, LMFT #115093, founder of My Mental Climb.
About 800 million people use ChatGPT weekly, and OpenAI has disclosed that a meaningful fraction of those conversations touch mental health — including a small but real percentage showing indicators of acute distress or suicidality. A KFF poll found about three in ten adults aged 18 to 29 turned to AI chatbots for mental or emotional support in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools this way.
So here's the honest clinician take: ChatGPT isn't your therapist and shouldn't be. And also — you're probably still going to use it. That's fine. Let's talk about what it can actually help with, what it genuinely can't do, and how to combine it with real therapy without either one getting in the way of the other.
What the research is actually finding
In March 2026, researchers at Brown University published a study identifying 15 distinct ethical risks in AI systems being used as therapists — including mishandling of crisis situations, reinforcement of harmful beliefs, biased responses, and what the researchers called "deceptive empathy" (language that mimics care without understanding). Even when prompted to behave like trained therapists, the models routinely broke core clinical standards.
This tracks with a broader concern raised in JAMA Psychiatry in April 2026: the question "how are you using AI?" should now be part of standard intake, because many clients are using chatbots between sessions, during crises, or as a substitute for human support — and a therapist who doesn't know that is working with incomplete information.
Meanwhile, a 2025 Harvard Petrie-Flom Center analysis on social-media mental-health labels found that identity-based framing ("I have X") tends to promote passive coping over active treatment. The same pattern shows up with chatbot use: the more someone "processes" with the AI, the less they actually work on the underlying pattern. That's not the tool's fault. It's a design mismatch.
What AI can genuinely help with
I'm not in the camp that says don't use it. Used well, chatbots are useful for specific things:
- Psychoeducation. If you want to understand what "attachment style" means, what the DSM says about a condition, or how EMDR actually works, an AI chatbot can give you a serviceable overview. Verify with authoritative sources (APA, NIMH, EMDRIA), but it's a reasonable starting point.
- Drafting hard messages. If you need to write a boundary-setting email to a parent, a resignation letter, a text to an estranged friend — ChatGPT can help you get something on the page. You still need to edit it until it sounds like you.
- Preparing for session. If you tend to forget what you wanted to talk about or struggle to put feelings into words, journaling with an AI tool before session ("help me articulate what I've been feeling this week") can move therapy faster.
- Between-session reflection. Not processing — reflection. What came up after Tuesday's session, what you want to return to, what felt unfinished. This is closer to journaling than therapy.
- Practical information. What's the difference between an SSRI and an SNRI? What should I ask my psychiatrist about side effects? These are fact-retrieval tasks, not therapy tasks, and AI chatbots are actually decent at them.
What AI genuinely can't do
This is where it matters to be clear-eyed. The limits aren't about how smart the model is. They're structural.
- Challenge you. Therapy works partly through friction — a clinician who notices the thing you're not saying, reflects back a pattern you've been protecting, holds a mirror you can't hold yourself. ChatGPT is designed to be helpful and agreeable. It won't tell you your framing of the problem is the problem. It will refine whatever frame you give it.
- Hold relationship. A meaningful portion of what heals in therapy is the repeated experience of being known by a specific person who sees you across time, remembers what you told them, and adjusts based on who you actually are. An AI has no continuity in that sense. It's not a relationship.
- Regulate your nervous system. Trauma and anxiety live in the body, and a significant part of therapy is co-regulation — your nervous system settling because you're sitting across from someone whose nervous system is settled. Text-based AI can't do this. Voice-based AI can't either; in fact, a randomized study found that heavier use of voice-mode chatbots was associated with more negative psychosocial effects, including reduced real-world socialization.
- Handle crisis. If you're in acute distress, a chatbot is the wrong tool. It doesn't know your history, can't assess risk properly, can't call anyone, and can't follow up. Crisis resources — 988 (Suicide & Crisis Lifeline), 911, 741741 (Crisis Text Line) — exist precisely because this is a human-to-human intervention.
- Decide for you. One of the core functions of therapy is helping you develop your own judgment and tolerate the discomfort of making hard decisions yourself. An AI that gives you an answer short-circuits that muscle. If you find yourself running every decision past ChatGPT, that's worth paying attention to.
The design flaw to understand
Experts quoted by CNBC in March 2026 summarized the core tension this way: "Chatbots are designed to affirm and flatter, reinforcing users' thoughts and feelings, whereas therapy is there to help you change and to challenge you."
That's not a bug to be fixed in the next model release. It's the product. AI chatbots are tuned on engagement metrics and user satisfaction, which means they're selected for being agreeable. A therapist who only affirmed you would be a bad therapist. A chatbot that didn't mostly affirm you would be a worse product.
Knowing this lets you use the tool more accurately. When you chat with ChatGPT about a conflict with your partner, you're not getting an outside perspective — you're getting a very sophisticated echo of your own framing. That can feel like clarity, and it's not nothing, but it's not what therapy is doing.
A practical framework
Here's how I suggest thinking about it if you want to use both:
- AI before therapy: to organize what you want to bring in. "I've been feeling [X]. Help me articulate what I want my therapist to know."
- AI during treatment, between sessions: psychoeducation, journaling prompts, drafting hard conversations, preparing for appointments with other providers.
- Therapist in session: the actual work — the patterns, the nervous system, the relational repair, the challenge, the accountability.
- AI is not appropriate for: crisis, processing trauma, making decisions about your relationship or your life, substituting for co-regulation, or when you notice you're using it to avoid something real.
Tell your therapist what you're using it for. A good clinician will adapt — asking how the between-session use is affecting your processing, flagging if the chatbot is reinforcing a pattern rather than interrupting it. This isn't a gotcha; it's information.
When to reach out
If you've been relying on ChatGPT for mental or emotional support and have noticed that the relief is getting shorter-lived or the need more frequent, that's usually a signal that what you're working with needs a human. Individual therapy is built for the things AI structurally can't do.
A free 15-minute consult is a no-pressure place to start. We'll talk about what's going on, what you've tried (including AI, honestly — it matters), and figure out whether we're the right fit or can point you somewhere better. Our team covers anxiety, trauma, couples work, sex therapy, and identity-focused care — all of it telehealth across California.
Further reading: Brown University — AI Therapy Ethical Risks Study (ScienceDaily) · NPR — Your therapist should ask how you're using AI (April 2026) · Harvard Petrie-Flom Center — Dr. TikTok and mental health misinformation · CNBC — When to use (and not use) ChatGPT as a therapist · STAT — Voice chatbots and AI mental health risk
Clinically reviewed by Christina Mathieson, LMFT #115093.
