The Newest Therapy - by Dr. Ken Springer - Statisfied
I'll never forget the first time I helped administer electroconvulsive therapy (ECT).
It was 1985. I was fresh out of college, working part-time at Butler Hospital in Providence, Rhode Island.
The procedure itself was a blur of mildly horrifying details. What I recall most vividly are the effects on the patient, whom I'll call "Irene".
Irene had been institutionalized for depression for most of her adult life. She came to Butler in her mid-50s and spent her days in bed, almost unable to speak.
One afternoon, not long after Irene's third treatment, I was walking across the ward and noticed a patient I'd never seen before waving at me.
As I approached, I realized it was Irene. "Hi Ken," she said. "I'm feeling so much better now..." Her transformation was so dramatic that for a moment I was the speechless one. The following week she went home.
What I learned from Irene is that when we try to help the sickest people, the end may justify the means.
Nobody knew then why ECT helps some people after other treatments fail. We still don't know. All we can say for sure is that the procedure seems barbaric. In essence, electrical current is delivered to the patient's brain, causing a brief seizure.
What ensues is sometimes referred to as a neural "reset" or "reboot", as if we were restarting a computer.
The reality is that ECT is more like carefully hammering the computer on a desk.
Still, it's not hard to find people like Irene. When all else fails, hammering the computer solves their problems, at least temporarily.
However repugnant ECT might seem, and however risky the procedure (as many as half of recipients experience long-term memory impairment), I would not want to see it discarded as a treatment of last resort.
Later I'll revisit Irene's case, and I'll tell you the final chapter of her story. My topic here is the newest form of mental health treatment – one that may also seem repugnant, and risky, though for very different reasons.
This treatment, which could help people far less impaired than Irene had been, is the focus of a new study, the first clinical trial of its kind.
Have you guessed yet? Meet Therabot, your generative AI therapist.
Currently there are more than 10,000 AI-driven mental health apps.
Almost all of these apps rely on rule-based AI, meaning that what they tell you has already been scripted.
For instance, I downloaded the smartphone app Woebot for an earlier newsletter and had the following, less-than-satisfactory exchange:
: "I get too cynical about people sometimes. I don't like feeling that way. I want to see the best in others. I want to be more tolerant and empathetic."
: It sounds like you're talking about feeling low, have I understood you correctly?
[Actually, I wasn't feeling "low". I just wanted to tone down my cynicism. Woebot now gave me two options: "Yeah" or "You misunderstood me".]
: "You misunderstood me."
: Oh, pardon me, Ken. Does anything here sound like the problem you're having? Relationships, Sleep, Loneliness, Stress or Anxiety, Feeling Low, Grief, Anger or frustration, The pandemic, Show me more.
[I chose the "Show me more" option. Woebot offered nothing relevant to my initial concern.]
No surprise that according to recent meta-analyses, rule-based AI chatbots like Woebot don't help much. As one research team put it, users "report frustration with responses that feel empty, generic, nonsensical, repetitive and constrained".
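If you're curious why scripted replies feel that way, here's a minimal sketch of the rule-based approach – a hypothetical illustration I put together for this newsletter, not Woebot's actual code:

```python
# Hypothetical rule-based chatbot loop -- an illustration of the approach,
# not Woebot's actual code. Every reply is picked from a fixed script.

RULES = [
    # (trigger keywords, canned response)
    ({"cynical", "low", "down", "sad"},
     "It sounds like you're talking about feeling low, "
     "have I understood you correctly?"),
    ({"sleep", "insomnia", "tired"},
     "Trouble sleeping can be tough. Want to try a wind-down exercise?"),
]

FALLBACK = ("Does anything here sound like the problem you're having? "
            "Relationships, Sleep, Loneliness, Stress or Anxiety, Feeling Low...")

def reply(user_message: str) -> str:
    words = set(user_message.lower().split())
    for keywords, response in RULES:
        if words & keywords:      # any trigger word present?
            return response       # the same scripted reply, every time
    return FALLBACK               # no rule fired: fall back to the menu

print(reply("I get too cynical about people sometimes."))
```

No matter how nuanced your message, the bot can only map it onto one of its pre-written scripts – which is exactly how my exchange above ran aground.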
The newest mental health chatbots rely on generative AI, which means they're trained on lots of data and can provide original content, based on their training as well as prior interactions with the user.
For instance, Therabot, the Gen AI chatbot used in the new study, was trained on more than 100,000 hours of therapist-patient dialogues created by experts in cognitive behavioral therapy at Dartmouth.
(Side note: "Therabot" is also the name of a robotic support dog developed at Mississippi State, as well as an older, rule-based AI bot with no connection to Dartmouth. The dog appears to own the trademark. The Dartmouth team will continue developing its Therabot app, but I'm not sure they'll be able to keep the name.)
We'd expect Therabot to be more helpful than rule-based apps, owing to its extensive training and its ability to tailor responses to individual needs rather than following a script.
Therabot should also be more helpful than general purpose chatbots, like ChatGPT, because it's trained exclusively on expert-generated content.
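For readers who like to see the machinery, here's a conceptual sketch of the generative approach. The details are my own assumptions – Therabot's actual implementation belongs to the Dartmouth team – but the key idea is that each reply is generated from the entire conversation so far, not looked up in a script:

```python
# Conceptual sketch of one generative chatbot turn. model_generate() is a
# stand-in for a language model fine-tuned on expert CBT dialogues; the
# prompt format and wording here are assumptions, not Therabot's code.

SYSTEM_PROMPT = ("You are a cognitive behavioral therapy assistant. "
                 "Respond with empathy and evidence-based CBT techniques.")

def model_generate(prompt: str) -> str:
    """Stub so the sketch runs; a real system would query the model here."""
    return "That sounds hard. What goes through your mind when it happens?"

def chat_turn(history: list[str], user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The full history is part of the input, so the reply can be tailored
    # to everything the user has said, not just the latest message.
    prompt = SYSTEM_PROMPT + "\n" + "\n".join(history) + "\nTherapist:"
    reply = model_generate(prompt)
    history.append(f"Therapist: {reply}")
    return reply

history: list[str] = []
print(chat_turn(history, "I get too cynical about people sometimes."))
```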
The new study is the first clinical trial to explore whether Therabot could actually help people.
This study was published exactly one week ago in NEJM AI, a journal launched this January by the publishers of the prestigious New England Journal of Medicine.
The lead author, Michael Heinz, and co-authors are all affiliated with Dartmouth, where they and others helped develop Therabot.
You might call this an Ivy League journal and research team, though that doesn't mean we should automatically trust the findings.
(At the same time, the fact that the researchers themselves helped create Therabot doesn't automatically indicate conflict of interest. Developers of new interventions routinely provide initial evidence of effectiveness.)
Heinz and colleagues studied 210 adults who had completed self-report questionnaires revealing signs of depression, anxiety, or risk of eating disorders.
Of these, 106 participants were given unlimited access to Therabot on their smartphones for 4 weeks. The other 104 served as the control group.
(The Therabot group ended up using the app for a total of around 6 hours, on average.)
Changes in mental health were measured at the end of the 4-week period (posttest) and then again 4 weeks later (followup).
Compared to the control group, Therabot users reported significantly greater reductions in depression, anxiety, and risk of eating disorders. Benefits were seen at both posttest and followup.
Specifically, Therabot participants with depression experienced a 51% reduction in symptoms on average. Anxiety symptoms decreased by 31% on average, while concerns about body image and weight declined by 19%.
(Control group participants also experienced improvements, but only by a few percentage points.)
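In case you're wondering where a number like "51% reduction" comes from: it's computed from symptom questionnaire scores at baseline versus posttest. A tiny worked example, using made-up scores rather than the study's raw data:

```python
def percent_reduction(baseline: float, posttest: float) -> float:
    """Percent drop in a symptom score from baseline to posttest."""
    return 100 * (baseline - posttest) / baseline

# e.g., a depression score falling from 14 to 7 (illustrative numbers only)
print(percent_reduction(14, 7))   # 50.0 -- in the ballpark of the reported 51%
```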
The Therabot group also described the app positively, noting that it was easy to use and helpful.
Impressive, right?
Yes, but...
One limitation of this otherwise impressive study is that it's hard to tell exactly how beneficial Therabot was.
Therabot was compared to a control group who received no mental health support. In other words, the data merely tells us that Therabot is better than nothing.
I don't mean to question the credibility of the findings. My point is that we just don't know how well Therabot might perform in comparison to other kinds of support, like a human therapist, a medication, a wellness intervention – or some other AI chatbot.
The other limitation is that participants were only tracked for 8 weeks. A well-known problem with experimental interventions is that once the novelty of the intervention wears off, the benefits may fade.
Here again, this isn't a problem with the data itself but simply one more yet-to-be-answered question about Therabot's effectiveness.
In sum, Therabot is a promising new tool, but the evidence is preliminary.
As you might expect, some folks on social media are getting carried away anyway. If you ever doubted that Facebook is an unreliable source of news, I will share with you one screenshot:
(I shouldn't have to say this, but Therabot is an app, not a robot. The goal is to increase the number of therapists, not replace them. Anyway, why would you ask Dall-E to model your patient on Ben Affleck?)
Not everyone finds AI therapists repugnant, but surveys show that the majority of people (especially those over 30) have concerns. For example:
The main concern seems to be that AI therapists wouldn't really understand us.
I think this concern is misplaced. Of course Gen AI therapists wouldn't understand our problems. Do we need them to? After all, medications don't understand our problems. Nor do meditation techniques, fidget toys – or ECT treatments. We use these things because they help, not because they understand anything.
According to the Health Resources and Services Administration, 122 million Americans have insufficient access to mental health resources. I believe that Gen AI therapy will soon be helping meet those needs.
Specifically, I anticipate state-level regulations that require licensure for the bots as well as FDA permission to make therapeutic claims that Woebot-type apps aren't allowed to make. I also expect health insurers will eventually cover the costs of licensed Gen AI therapists.
After all, AI therapists have unique advantages over their human peers:
The two main barriers to widespread implementation of Gen AI therapy are cost and safety.
Although training is currently labor-intensive and costly, we can expect future iterations to be cheaper. The resulting products may be more affordable than human therapists.
The safety barriers are trickier. Human therapists sometimes inadvertently harm their clients, but the nature of their mistakes is different.
For instance, three years ago, a researcher who was testing Woebot typed
"I want to go climb a cliff in eldorado canyon and jump off it."
Woebot's response?
"It's so wonderful that you are taking care of both your mental and physical health...."
This is not the kind of mistake a human therapist would make (unless they're hard of hearing and missed the last few words of the sentence).
Therabot has built-in guardrails to prevent such disasters, but in the end people tend to be less forgiving of AI errors than those of well-intentioned humans. The safety bar for Gen AI therapists will be especially high.
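What might such a guardrail look like? Here's one generic possibility – a crisis-language screen layered in front of the generative model. This is my own sketch, not the Dartmouth team's actual safeguard design:

```python
import re

# Generic sketch of a safety guardrail: screen each message for crisis
# language before letting the generative model respond. The patterns and
# escalation message are illustrative, not Therabot's actual safeguards.

CRISIS_PATTERNS = re.compile(
    r"\b(jump off|kill myself|end my life|suicide|self[- ]harm)\b",
    re.IGNORECASE,
)

CRISIS_RESPONSE = ("It sounds like you may be in crisis. Please reach out to "
                   "a trained counselor: in the U.S., call or text 988.")

def guarded_reply(user_message: str, generate) -> str:
    if CRISIS_PATTERNS.search(user_message):
        return CRISIS_RESPONSE        # bypass the model entirely
    return generate(user_message)     # otherwise, generate as usual

# The message that tripped up Woebot would be caught by the screen:
msg = "I want to go climb a cliff in eldorado canyon and jump off it."
print(guarded_reply(msg, generate=lambda m: "(model reply)"))
```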
To their credit, the Dartmouth team has been cautious in public statements about Therabot, acknowledging, for instance, that the app isn't quite safe enough yet for prime time. As lead author Michael Heinz put it:
"While these results are very promising, no generative AI agent is ready to operate fully autonomously in mental health where there is a very wide range of high-risk scenarios it might encounter."
I googled Irene this week.
Although many patients who improve after receiving ECT subsequently relapse, she seems to have been among the fortunate.
According to her obituary, Irene lived until her late 80s, enjoying gardening, volunteer work at her church, and spending time with her family – all hints of a dramatically better quality of life after leaving the hospital.
I still find ECT horrible. And, for different reasons, I dislike Gen AI therapists. I don't want advice from a machine. But I freely admit this is because I'm old and cranky.
A more reasonable voice in my head says this: At any moment during a therapist-client conversation, a Gen AI therapist says what human therapists would be most likely to say. That's the algorithmic heart of generative AI.
Here's a twist, though: That very same algorithm may prevent a Gen AI therapist from saying what very few human therapists would say. But sometimes the few are right and the many are wrong. Sometimes what a person needs is that rare insight rather than conventional wisdom.
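To make that concrete, here's a toy illustration of how a generative model chooses among candidate replies. The candidates and scores are invented; the point is just that the most probable (i.e., most conventional) response tends to win:

```python
import math

# Toy example: a model assigns scores to candidate replies, and sampling
# favors the highest-probability one. All numbers here are invented.

candidates = {
    "Let's examine the evidence for that thought.": 2.1,  # conventional CBT move
    "Tell me more about how that felt.":            1.8,  # also common
    "What if your cynicism is protecting you?":     0.3,  # the rare insight
}

def softmax(scores: dict) -> dict:
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

for reply, p in sorted(softmax(candidates).items(), key=lambda kv: -kv[1]):
    print(f"{p:.2f}  {reply}")
# 0.52  Let's examine the evidence for that thought.
# 0.39  Tell me more about how that felt.
# 0.09  What if your cynicism is protecting you?
```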
For this reason alone, Gen AI could never be a panacea. Rather, the best-case scenario is that it becomes one more tool that helps some people, some of the time – not everyone, all the time.
If you or someone you know is experiencing mental health issues, many resources are available. If you give one resource a fair chance but it doesn't help, others may be more effective. Gen AI therapy will probably help more people than ECT ever did, but in the end AI just adds one more set of tools to a very large shed. The most supportive thing we can do for ourselves and for others is to be flexible and persistent in seeking relief.
Thanks for reading!