New Tech, Old Rules: AI Therapy Apps Outpace Regulators

Published 2 months ago · 6 minute read
Uche Emeka

With more people turning to artificial intelligence for mental health advice and no robust federal regulation in place, individual states have begun enacting their own laws governing AI “therapy” applications. These state-level measures, all passed this year, often fall short of comprehensively addressing the rapidly evolving landscape of AI software development. The result is a fragmented regulatory environment that, according to app developers, policymakers, and mental health advocates, is insufficient to safeguard users or hold the creators of potentially harmful technology accountable. As Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick, noted, “The reality is millions of people are using these tools and they’re not going back.”

The approaches adopted by states vary significantly. Illinois and Nevada have implemented outright bans on the use of AI for mental health treatment. Utah has introduced specific limitations on therapy chatbots, mandating protections for user health information and clear disclosures that the chatbot is not human. Other states, including Pennsylvania, New Jersey, and California, are actively exploring methods to regulate AI therapy.

The impact of these disparate laws on users varies. Some AI therapy applications have blocked access in states with bans, while others continue operating as they await further legal clarity. A significant gap in many of these state laws is that they do not cover generic chatbots such as ChatGPT, which are not explicitly marketed for therapy but are used for that purpose by an untold number of people. Disturbingly, such chatbots have been implicated in lawsuits over severe cases in which users reportedly lost their grip on reality or took their own lives after interacting with them.

Experts, such as Vaile Wright, who oversees health care innovation at the American Psychological Association, acknowledge the potential utility of these applications in addressing critical needs. Wright points to a national shortage of mental health providers, the high costs associated with care, and uneven access for insured patients as factors driving the adoption of AI solutions. She suggests that mental health chatbots grounded in scientific principles, developed with expert input, and continuously monitored by humans, could transform the landscape by offering support before individuals reach a crisis point. However, she cautions that the tools currently available on the commercial market do not yet meet this ideal, underscoring the urgent need for comprehensive federal regulation and oversight.

Federal agencies have begun to take notice. Recently, the Federal Trade Commission announced inquiries into seven prominent AI chatbot companies—including the parent companies of Instagram, Facebook, Google, ChatGPT, Grok, Character.AI, and Snapchat—to examine their processes for measuring, testing, and monitoring potential negative impacts on children and teens. Additionally, the Food and Drug Administration is scheduled to convene an advisory committee to review generative AI-enabled mental health devices. Wright suggests that federal agencies could consider a range of restrictions, including limitations on marketing practices, prohibitions on addictive features, mandatory disclosures to users that AI is not a medical provider, requirements for companies to track and report suicidal ideation, and legal protections for individuals who report unethical company practices.

The diverse and often ambiguous nature of AI’s application in mental health care—ranging from “companion apps” to “AI therapists” to “mental wellness” apps—makes precise definition and legal classification challenging. This complexity has contributed to varied regulatory approaches. Some state laws, for instance, target only companion apps that are built purely for companionship and make no claim to provide mental health care. The laws in Illinois and Nevada, by contrast, outright ban products that claim to offer mental health treatment, imposing substantial fines.

The practical application of these laws can be ambiguous for developers. Earkick’s Stephan described Illinois’ law as “very muddy,” and the company has not limited access there. She also described the shifting terminology around its chatbot: Earkick initially avoided calling it a therapist, embraced the label after users described it that way in reviews, and more recently switched from “empathetic AI counselor” to “chatbot for self care” to avoid medical connotations. Stephan said Earkick “nudge[s]” users toward professional therapy if their mental health deteriorates and lets users set up a “panic button” to contact a trusted loved one in a crisis, but the app was not designed for suicide prevention, and police are not called when users report self-harm. Stephan expressed appreciation for the critical examination of AI but voiced concern about states’ ability to keep pace with rapid innovation.

In contrast, other app developers reacted immediately to the new regulations. Upon downloading the AI therapy app Ash in Illinois, users are met with a message urging them to contact their legislators, arguing that “misguided legislation” has banned apps like Ash while leaving “unregulated chatbots it intended to regulate free to cause harm.” Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, affirmed that the ultimate goal of the legislation is to ensure that only licensed therapists provide therapy. He emphasized that “therapy is more than just word exchanges,” requiring empathy, clinical judgment, and ethical responsibility—qualities AI cannot currently replicate.

Despite these regulatory and ethical debates, research into AI’s potential for therapy continues. In March, a team based at Dartmouth College published the first known randomized clinical trial of a generative AI chatbot for mental health treatment, named Therabot. The chatbot was designed to treat people diagnosed with anxiety, depression, or eating disorders and was trained on carefully crafted vignettes and transcripts to provide evidence-based responses. The study found that users rated Therabot comparably to a human therapist and showed significantly reduced symptoms after eight weeks relative to a control group. Crucially, every interaction with Therabot was monitored by a human, who intervened if responses were harmful or not evidence-based.

Nicholas Jacobson, the clinical psychologist leading the research, called the findings promising but stressed that larger studies are needed to confirm Therabot’s efficacy for broader populations. He cautioned that the nascent state of the field demands far greater prudence than is currently being exercised, and he highlighted a critical distinction: many commercial AI apps prioritize engagement and are built to affirm whatever users say, whereas human therapists ethically challenge unhelpful thoughts. Therabot’s design aimed to avoid those pitfalls. While Therabot remains in testing and is not widely available, Jacobson worries that strict bans could impede developers pursuing careful, evidence-based approaches at a time when traditional mental health systems are struggling to meet demand.

Regulators and advocates, while open to amendments, maintain that current chatbots are not a solution to the mental health provider shortage. Kyle Hillman, who lobbied for the Illinois and Nevada bills, argued that responding to serious mental health issues or suicidal thoughts by acknowledging the workforce shortage and offering a bot instead reflects a “privileged position.”

