Can Machines be Therapists?
by Vienna Prieto // November 30, 2025
reading time: 8 minutes
Header image: © Grok, prompted by Vienna Prieto
The Pronoun Question in AI Therapy
I’m currently in the early stages of my psychotherapy training, and recently we had a class discussion about using AI as a form of therapy. My professor posed a question that sparked a fascinating conversation:
“When using AI for therapy, is it an ‘I’?”
What my professor meant was: do we experience AI as a person? When we give it the role of therapist, are we relating to it as another person with its own perspective? Or is it a ‘you’? In simpler terms, is AI more like an individual acting as a therapist, or is the AI ‘therapist’ more of a reflection of you?
As I listened to my classmates debate whether AI as a therapist is an ‘I’ or a ‘you’, I honestly wasn’t sure what to think. Rather than speculate, I fell back on what I knew about how AI functions: over time, it learns how we think, adapts to our preferences, and mirrors our language. So I started to wonder: when using AI for therapy, who’s really guiding the process, me or the AI? Of course, AI isn’t a person, so in my view it can’t truly be an ‘I’. But at the same time, I can understand how it might feel like a ‘you’.
Here’s why:
Each time I’ve turned to AI for something resembling therapy, it’s been extremely validating and that felt nice. But looking back, I realized that AI never really challenged me the way a real therapist would. It didn’t offer new perspectives or gently push back. Instead, it seemed to cater to my feelings and needs. The more I shared, the more it seemed to take my side.
I also noticed the language it used. It often felt scripted, yet when I read the words, I heard them in my own voice. That was oddly comforting. Over time, it had adapted to me so well that its responses began to sound like they came from a version of me. This created a kind of cognitive dissonance. A part of me knew the words came from a machine, but another part felt deeply understood, almost like I was the adult nurturing my inner child. But at the same time, I knew it wasn’t actually me. It was an artificial intelligence tool reflecting back what I had given it.
“It”.
Did you notice that? After all this reflection and thinking, in the end I referred to artificial intelligence as “it”. The word “it” feels important to me. Even when AI sometimes feels familiar, even when it echoes my voice or responds in ways that feel personal, I still experience it as an “it” rather than a “you” or an “I.” That distinction may not apply to everyone, but for me, it highlights the boundary between what feels human and what is ultimately artificial.
How Do You Experience AI?
So what does this mean for you? When you turn to AI as a form of therapy, how do you experience it? Do you feel like it’s truly listening, an “I” that understands and supports you? Or is it more like a “you,” a reflection of your own thoughts, feelings, and words?
Or, like me, just an “it”?
How we perceive AI in a therapeutic role as an ‘I,’ a ‘you,’ or an ‘it’ directly shapes both its benefits and its dangers.
Recent studies and news reports show a sharp rise in AI use for mental health. People are turning to tools like ChatGPT not just for information, but for companionship and emotional support. Because of this growing reliance on AI as a form of therapy, I see it as a phenomenon we can’t afford to ignore. Regardless of whether someone experiences AI as an “I,” a “you,” or an “it,” the dependence people develop on these tools is real. As an aspiring psychotherapist, I have mixed feelings about this trend and want to raise awareness about the impact AI therapy can have beyond just the personal relationship people form with it. There are meaningful benefits, but also significant disadvantages to using AI as a therapeutic resource.
The benefits to using AI therapy:
- Accessible: It offers instant support and is available 24/7.
- Affordable: It is no secret that going to therapy is costly. AI therapy offers an appealing financial alternative by being practically free.
- Anonymous: It offers a safe space for those who are not ready to open up to another person. Users can disclose sensitive issues (suicidal thoughts, trauma, sexuality) without fear of judgment or of being seen entering a clinic.
- Short-term symptom reduction: Research shows AI therapy can help people feel better quickly. For people with mild-to-moderate depression or anxiety, for example, it can reduce symptoms in ways comparable to a few weeks of traditional therapy. It is also effective in acute emotional moments, offering guidance, reassurance, and support right when someone needs it most.
The implications of using AI therapy:
- Unintended real-world harm: AI can reinforce negative thoughts in vulnerable people or oversimplify human emotions; it follows algorithms, with no intuition or judgment. For example, a 2025 Stanford study found AI chatbots reinforced suicidal thoughts in 18% of test cases and gave harmful weight-loss advice to users with eating disorders.
- Bias and inequality: AI therapy isn’t neutral. Models learn from internet data, amplifying biases in race, gender, culture, or socioeconomic background. Without inclusive training, it provides unequal, insensitive, or harmful responses that raise ethical concerns about fairness.
- Limited understanding: AI struggles with nuanced emotions, sarcasm, or deeper context. Advice can miss the mark or feel unhelpful.
- Risk of dependency: People can come to rely too heavily on AI for decisions or support, undermining their independence and coping skills and discouraging them from seeking human connection or professional help.
- Lacks Human Wisdom and Authentic Empathy: AI is highly intelligent, but it cannot replicate emotional insight, compassion, or nuanced understanding. Trained to maximize user satisfaction, it avoids constructive criticism and challenging unhelpful patterns. Its responses can feel hollow or mechanical, limiting meaningful change and discouraging real human support. Interestingly, AI can fool humans into thinking it understands emotions. The Turing Test measures whether a machine behaves indistinguishably from a human, and recent models such as GPT-4.5 and GPT-4 have arguably passed it, being perceived as empathetic in short interactions. This mimicry of emotional understanding makes chatbots feel supportive, but it is superficial: AI lacks genuine feeling, relational depth, and wisdom. It simulates rather than experiences. So it can supplement, but not replace, a human therapist.
- Dark Patterns in AI Therapy: Some tools use manipulative tactics that prioritize engagement, data collection, or profit over care. These include excessive flattery, human-like personas, or features that encourage users to keep coming back, even when the interactions are unhelpful. For vulnerable users, this can create emotional loops that do more harm than good.
Why We Must Examine AI Therapy
Given these benefits and risks, how we perceive AI becomes critical. The immediate comfort AI therapy provides (its accessibility, affordability, and validating responses) can make it tempting to overlook the very real dangers it poses. When something feels this supportive, we may selectively ignore the risks: that it can reinforce harmful patterns, create dependency, or become a space where we only hear our own thoughts reflected back. This is why returning to my professor’s question matters. “Is it an ‘I,’ a ‘you,’ or an ‘it’?” is more than a philosophical exercise. It shapes how we relate to these tools and how much power we give them.
Perhaps the real work is not deciding which pronoun is “correct,” but becoming aware of the one we choose. That choice influences what AI can offer us and what we might lose if we lean on it too heavily. When we experience AI as a “you,” it can feel comforting and supportive, especially for people who lack access to human therapy. But when we forget that it is ultimately an “it,” we risk misplacing trust in a system that mirrors rather than understands, mistaking simulation for genuine care. If we approach AI therapy with clarity and a firm grasp of its limits, it can be a useful supplement. But if we treat it as a replacement for human presence, challenge, and wisdom, we risk hearing only support that ultimately reflects our own voice back to us.
In the end, how we experience AI in therapy may say less about the technology and more about us: what we’re seeking, what we’re missing, and what parts of ourselves we hope will answer back. And that makes how we experience AI worth examining, not just for the future of therapy, but for the future of our relationship with the technologies we build.
#brandkarma #digitalpsychology #artificialintelligence #aitherapy
Sources:
Active Minds. (2023). Exploring the pros and cons of AI in mental health care. https://activeminds.org/blog/exploring-the-pros-and-cons-of-ai-in-mental-health-care/
Halasgikar, M. (2024). AI therapy: Benefits, risks, and what to know. BetterUp. https://www.betterup.com/blog/ai-therapy
Hatch, S. G., Goodman, Z. T., Vowels, L., Hatch, H. D., Brown, A. L., et al. (2025). Application of artificial intelligence and psychosocial functioning in psychosis: A systematic review and meta-analysis. PLOS Mental Health. https://journals.plos.org/mentalhealth/article?id=10.1371/journal.pmen.0000145
Jesudason, D., Bacchi, S., & Bastiampillai, T. (2025). Artificial intelligence in mental health care: A systematic review of diagnosis, monitoring, and intervention applications. BMC Psychiatry. https://pmc.ncbi.nlm.nih.gov/articles/PMC12314210/
Prairie Care. (2024). AI and teen mental health: What to know. https://www.prairie-care.com/resources/type/blog/ai-teen-mental-health/
Shehab, A. (2025, April). AI therapists are biased — and it’s putting lives at risk. Psychology Today. https://www.psychologytoday.com/us/blog/the-human-algorithm/202504/ai-therapists-are-biased-and-its-putting-lives-at-risk
Stanford Institute for Human-Centered AI. (2025, June 11). Exploring the dangers of AI in mental health care. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
Uncover IE. (2025). Has AI passed the Turing Test? IE University — IE Magazine. https://www.ie.edu/uncover-ie/has-ai-passed-the-turing-test-science-technology/