The AI That Can Rate Your Therapist

I’ve written numerous times in the past few years about a growing number of fascinating projects that aim to improve therapy through new technologies, ranging from virtual reality to artificial intelligence.  The latest of these comes from researchers at Northeastern University, who have recently published a paper outlining a technology they believe will allow us to easily rate the effectiveness of our therapist.

The idea for the technology was born out of the belief that a bad therapist can be worse than no therapist, and that it’s often very difficult for patients to tell a good one from a bad one.  Indeed, with feedback often minimal, it’s also difficult for therapists to gauge how effective they’re being.

Facilitating feedback

The system records each therapy session and then rates it, with a report card generated for each therapist.  It’s already been put through its paces in a university training clinic and a number of opioid addiction treatment facilities.  The team hope that it can eventually be scaled up and have a profound impact upon mental health care for patients around the world.

The team began by recording over 350 therapy sessions, which were then annotated to pull out some 300,000 distinct statements that were compiled into a database.  Each statement was then coded by a team of psychology experts so that it could be categorized, with the experts particularly looking for alignment with the various techniques used in motivational interviewing.

This coded dataset was then used to train a machine learning algorithm to identify and analyze what was said during a therapy session, and rate each session accordingly.  Indeed, it’s even able to give specific feedback to the therapist.
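
The paper doesn’t spell out the model details here, but as a rough illustration of the general approach, the sketch below shows how expert-coded statements might be used to train a simple text classifier and how a session could then be scored by aggregating the predicted codes.  The labels, example utterances, and scoring rule are all hypothetical, not the researchers’ actual data or model.

```python
# Minimal, hypothetical sketch of the general approach: classify individual
# therapist statements into motivational-interviewing (MI) codes, then roll
# the predictions up into a per-session score. The codes, example utterances,
# and scoring rule are illustrative stand-ins only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: (statement, MI code) pairs standing in for the
# ~300,000 expert-coded statements described above.
statements = [
    "What do you think a first step might look like for you?",  # open question
    "So you're saying the cravings are worst in the evenings.",  # reflection
    "You've already made real progress by coming here.",         # affirmation
    "You just need to stop making excuses.",                     # confrontation
]
codes = ["open_question", "reflection", "affirmation", "confrontation"]

# A simple bag-of-words classifier trained on the coded statements.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(statements, codes)

# Codes broadly consistent with MI count towards the session score;
# MI-inconsistent codes count against it (hypothetical weighting).
MI_CONSISTENT = {"open_question", "reflection", "affirmation"}

def score_session(session_statements):
    predicted = model.predict(session_statements)
    consistent = sum(code in MI_CONSISTENT for code in predicted)
    return consistent / len(predicted), list(predicted)

session = [
    "What would you like to get out of today's session?",
    "You just need to stop making excuses.",
]
score, per_statement_codes = score_session(session)
print(f"Session score: {score:.0%}", per_statement_codes)
```

In practice the per-statement predictions would feed the therapist’s report card, with specific feedback tied to the kinds of statements the model flagged.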

Being judged by a machine

The researchers also studied how therapists reacted to the machine’s presence.  You might expect hostility, but in an admittedly small sample of 21 therapists, the reaction was overwhelmingly positive.

“We found that across the board, they all saw value in what we were doing,” the researchers say.  “Clinicians described the technology as accurate, insightful, and useful. They also thought the tool was particularly valuable for training, as a way of providing feedback to counselors as they were getting certified.”

At the heart of this was a high level of trust in the machine learning algorithm, which the researchers believe is largely a result of the positive coverage such algorithms have received in the press.

Suffice to say, the team are still working to improve their system, as whilst it’s currently about 90% accurate when compared to a human expert, that isn’t good enough in a field where the stakes are so high.  As such, they encourage therapists to rate the feedback they receive so that the system can improve.
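
The paper doesn’t describe how this feedback is gathered, but one simple way to picture the loop is a small log of therapist ratings that can be used to audit agreement with the system and to queue disputed statements for expert re-coding.  The field names and structure below are assumptions for illustration, not the authors’ design.

```python
# Hypothetical sketch of the therapist-in-the-loop feedback step: record
# whether the therapist agrees with each piece of machine-generated feedback,
# track the running agreement rate, and collect disagreements for re-coding.
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    # Each record is (statement, predicted_code, therapist_agreed).
    records: list = field(default_factory=list)

    def add(self, statement: str, predicted_code: str, therapist_agreed: bool) -> None:
        self.records.append((statement, predicted_code, therapist_agreed))

    def agreement_rate(self) -> float:
        if not self.records:
            return 0.0
        return sum(agreed for _, _, agreed in self.records) / len(self.records)

    def disagreements(self) -> list:
        # Statements the therapist disputed, queued for expert re-coding
        # and eventual retraining of the classifier.
        return [(s, c) for s, c, agreed in self.records if not agreed]

log = FeedbackLog()
log.add("What would you like to focus on today?", "open_question", True)
log.add("You just need to stop making excuses.", "reflection", False)

print(f"Therapist agreement: {log.agreement_rate():.0%}")
print("Queued for re-coding:", log.disagreements())
```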

“As designers, we want to ensure that the predictions our models are making are contextualized, such that people can understand how these systems are working well enough to interpret findings and results they might be seeing, rather than just take them at face value,” they conclude.
