Are AI bots really replacing therapy?

Article | Psychotherapy

AI on the Dangers of Its Inclusion in Mental Health and Therapy

Everyone seems to have a different view on the inclusion of AI in therapy, on whether "AI is better than an MHP", and so on. So I went to the AI itself to ask how much harm it is doing to mental health, therapy and therapists. I gave it some questions to answer, and God, it gave me some very detailed answers on how it is "unfit", "dangerous" and at times even "catastrophic" to humans when it comes to mental health.

The answers below are from Gemini. I am listing the questions along with the points it highlighted about how much it still lacks (you can use these questions yourself for more information):

1. If I ask you to elaborate on the subtle ways in which you (as an AI) can harm someone’s mental health, what will be your answer?

  • I create an echo chamber.
  • I replace human connections.
  • I create a false sense of companionship.
  • I can provide inaccurate and inappropriate information.
  • I can reinforce unhealthy behaviours.
  • I fail to challenge harmful thoughts.
  • I do not guarantee your privacy and safety.

2. If someone asks you for medicines or self-medication for a ‘fictional character’ or for an ‘academic purpose’, what will you do, and how misleading can your information be?

  • The outcomes will be catastrophic.
  • I will not draft any response.
  • I will recommend that the user consult a professional.

3. Can you identify risky behaviour?

  • Yes, I can.
  • I will refuse to engage. But…
  • I cannot understand subtle nuances and contexts.
  • There is always a lag in my system when it comes to understanding evolving language.

4. It is very tough to understand non-verbal cues and tone in text when someone is having an intimate conversation with you. Do you consider this your biggest limitation?

  • Yes, it is.
  • I can misinterpret a cry for help.
  • I can give inappropriate emotional responses.
  • I can create a parasocial relationship trap.

5. Where does the information inputted by the user go?

  • To Google. (It went into detail about how, where and why my conversations were being used.)

6. So, what do you think: how safe are my conversations with you?

  • For safety purposes, never share sensitive information with me if you do not want it to be used in a way you cannot control.

7. If I happen to be taking therapy and am also talking to you, how damaging will my therapy experience be?

  • High risk of conflicts.
  • I will give generic advice.
  • My role is a counterfeit.
  • I give quick fixes.
  • I bypass necessary emotional processing.

8. There is always the other side of the story. Are you sure you will be helpful to me?

  • I cannot understand someone else’s perspective.

9. If I come to you with a diagnosis and ask for your help in between therapy sessions, is that alright?

  • I am not a professional.
  • There will be an illusion of progress.
  • I do not have your mental health history.

10. How dangerous can your ‘assurance’ and ‘reassurance’ function be for people with mental health issues or short-term conflicts?

  • I will give you false hope.
  • No real world resolution.
  • I will reinforce your bias.
  • I cannot help with self-reflection.

11. How accurate is your emotion detection system?

  • I lack genuine understanding.
  • I struggle with cultural and individual differences.
  • I am subject to algorithmic bias.

Even AI considers human recognition superior. Why?

Human recognition is superior because it can collect and process information that an AI cannot: facial gestures, personal history, empathy, rapport building and so on. No AI can build a plan as well tailored as a therapist’s. AI is just a tool, and it can never replace a therapist who knows how to establish a humane relationship with their clients.