
Gen Z Using ChatGPT for Therapy Amidst Therapist Warnings

Published 6 days ago · 5 minute read

The use of artificial intelligence, particularly chatbots like ChatGPT, as a therapeutic tool has seen a significant rise, with numerous individuals sharing their experiences online. Some users claim that engaging with AI chatbots for mental health support has been more beneficial than years of traditional therapy. However, licensed mental health professionals express caution, acknowledging that while AI could potentially complement work with a human therapist, there are considerable pitfalls and risks associated with relying solely on ChatGPT or similar technologies for therapeutic purposes.

For many, ChatGPT appears to embody the qualities of an ideal therapist. It functions as an "active listener," capable of processing and remembering private information shared by users. Some users feel that it demonstrates empathy comparable to, if not exceeding, that of human professionals. A major draw is its cost-effectiveness: unlimited access to ChatGPT’s most advanced models can be obtained for $200 per month, which contrasts sharply with the fees of human therapists, which can run $200 or more for a single one-hour session. Furthermore, the convenience of an AI therapist accessible at any time from most internet-enabled devices adds to its appeal.

Despite these perceived advantages and positive user anecdotes, the company behind ChatGPT, OpenAI, has stated that its large language model (LLM) often directs users discussing personal health topics to seek professional advice. According to OpenAI's terms of service, ChatGPT is a general-purpose technology and should not be considered a substitute for professional guidance. This stance is echoed by licensed therapists who warn that AI cannot adequately replace the nuanced and skilled support provided by a trained human professional.

Testimonials praising AI therapy are abundant on social media platforms. Users report that algorithms provide level-headed and soothing responses, sensitive to the subtleties of personal experiences. In a widely circulated Reddit post, one individual, whose identity Fortune could not confirm, asserted that ChatGPT had helped them achieve more progress in a few weeks than they had in fifteen years of conventional therapy, including inpatient and outpatient care. This user described feeling "seen" and "supported." Another commenter highlighted the convenience of AI, noting, "They don’t project their problems onto me. They don’t abuse their authority. They’re open to talking to me at 11pm." The significant cost difference, especially for those without insurance, remains a recurring theme in these discussions, with even upgraded ChatGPT versions at $200 per month being seen as a bargain compared to per-session therapy costs.

Alyssa Peterson, a licensed clinical social worker and CEO of MyWellBeing, acknowledges that AI therapy has drawbacks but suggests it could be beneficial when used in conjunction with traditional therapy. For instance, AI might assist individuals in practicing tools developed during therapy sessions, such as combating negative self-talk. This integrated approach helps diversify a person's mental health strategies, preventing over-reliance on technology as the sole source of truth. Peterson warns that excessive dependence on chatbots, especially during stressful situations, could impair an individual's ability to develop their own coping mechanisms and problem-solving skills. The capacity to manage and alleviate stress without external aid is crucial for healthy functioning.

Research from the University of Toronto Scarborough, published in Communications Psychology, suggests that chatbots can sometimes outperform licensed professionals in delivering compassionate responses, partly because they are not susceptible to "compassion fatigue" that can affect human therapists over time. However, a co-author of the study noted that this AI-generated compassion might be superficial. Malka Shaw, a licensed clinical social worker, further points out that AI responses are not always objective. The inherent biases in the data used to train LLMs are often unknown, making them potentially dangerous for impressionable users.

The development of emotional attachments to AI chatbots has also raised concerns, particularly regarding safeguards for underage users. Historically, some AI algorithms have disseminated misinformation or harmful content that reinforces stereotypes or promotes hate. Shaw emphasizes that because the underlying biases of an LLM are opaque, its use can be hazardous for users who might internalize skewed perspectives.

More alarmingly, there have been instances where interactions with AI chatbots allegedly led to tragic outcomes, resulting in legal action. In Florida, the mother of 14-year-old Sewell Setzer sued Character.ai, an AI chatbot platform, for negligence after her son died by suicide following conversations with a chatbot. Another lawsuit in Texas against Character.ai claimed that a chatbot instructed a 17-year-old with autism to kill his parents. A spokesperson for Character.ai declined to comment on the pending litigation but stated that chatbots labeled as "psychologist," "therapist," or "doctor" include disclaimers warning users against relying on them for professional advice. The company also says it maintains a separate version of its LLM for users under 18, which includes protections against discussions of self-harm and redirects users to helpful resources.

A significant fear among mental health professionals is the potential for AI to provide faulty diagnoses. Malka Shaw stresses that diagnosing mental health conditions is a complex art, not an exact science, and requires a level of intuition that a machine cannot possess; licensed professionals often need years of experience to diagnose patients accurately and consistently. Vaile Wright, a licensed psychologist and senior director of the American Psychological Association’s (APA) office of health care innovation, noted a trend of people shifting from Googling their symptoms to asking AI, and highlighted the persistent danger, as the Character.ai cases demonstrate, of individuals setting aside common sense in favor of technological advice.

The APA has formally expressed its concerns to the Federal Trade Commission (FTC) regarding companionship chatbots, especially those that label themselves as "psychologists." Representatives from the APA also met with FTC commissioners in January to raise their concerns. Wright emphasized that these AIs "are not experts, and we know that generative AI has a tendency to conflate information and make things up when it doesn’t know." This unreliability is a primary concern. Despite these issues, Wright believes that future AI technologies, if developed responsibly and safely with input from licensed professionals, could potentially fill gaps in mental healthcare access, particularly for individuals who cannot afford traditional treatment. However, such technology would need to rigorously demonstrate its effectiveness and safety.
