Using AI to accurately detect PTSD from your speech

I wrote recently about a number of fascinating projects that use AI to analyze our speech for everything from Alzheimer’s to empathy levels, an approach that is already being deployed commercially in areas such as counseling and customer service.

It’s an approach that is increasingly common, with a team from New York University also using AI to try to detect signs of post-traumatic stress disorder. Whilst one would imagine the symptoms of PTSD are fairly obvious, professionals actually have a tough time identifying people who suffer from it.

Stress detectors

Such an approach is especially useful for conditions such as PTSD, as not only are there no blood tests we can perform, but sufferers are often deeply embarrassed about their feelings and thus have difficulty talking about their mental health.

The researchers are gathering voice samples from a number of combat veterans and analyzing them for pitch, cadence, rhythm and tone to see if they can uncover markers of conditions such as PTSD and depression. These markers are then used to train algorithms that can analyze the speech of veterans in real time.

In total, 30 distinct vocal characteristics have been identified that are associated with PTSD. When the system was first tested, it detected PTSD with an accuracy of 77%, but the team are confident this can be improved significantly once they have more data to work with.
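To make the general idea concrete, below is a minimal, hypothetical sketch of such a pipeline in Python: it summarises each recording with a handful of pitch, energy and timbre statistics and trains a simple classifier on them. The libraries (librosa, scikit-learn), the particular features and the file names are assumptions chosen for illustration, not the NYU team’s actual system.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def vocal_features(path):
    # Summarise one recording as a small vector of pitch, energy and timbre statistics.
    y, sr = librosa.load(path, sr=16000)

    # Pitch contour (fundamental frequency) as a rough proxy for pitch and intonation.
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)

    # Frame-level energy and zero-crossing rate as crude rhythm/cadence proxies.
    rms = librosa.feature.rms(y=y)[0]
    zcr = librosa.feature.zero_crossing_rate(y)[0]

    # MFCCs as a rough stand-in for tone/timbre.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    return np.concatenate([
        [np.nanmean(f0), np.nanstd(f0)],  # pyin returns NaN for unvoiced frames
        [rms.mean(), rms.std()],
        [zcr.mean(), zcr.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

# Hypothetical labelled corpus: file paths and 0/1 PTSD labels (placeholders only).
paths = ["veteran_001.wav", "veteran_002.wav", "veteran_003.wav", "veteran_004.wav"]
labels = [1, 0, 1, 0]

X = np.vstack([vocal_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)

# Score a new recording: predicted probabilities for each class.
print(clf.predict_proba(vocal_features("veteran_new.wav").reshape(1, -1))[0])

In practice a system like the one described would rely on a far richer feature set and a properly validated corpus of recordings, but the shape of the pipeline — acoustic features in, a trained classifier out — is the same.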

“Medical and psychiatric diagnosis will be more accurate when we have access to large amounts of biological and psychological data, including speech features,” they say.

The project is one of a number that are attempting to automatically derive medical insights from simple voice recordings. It’s a feature you could easily see rolled out on smartphones, or even on newer products such as the Amazon Echo. Indeed, the team behind the Echo left Amazon to form Canary Speech, the startup that is analyzing speech for conditions such as Alzheimer’s.

Growing interest

We are, of course, at a very early stage in this process, but researchers have already identified the potential for the technique to be used in areas such as postpartum depression detection and heart disease.

As interest in the method grows, researchers will also need to analyze a wider range of accents and dialects, and to test whether the algorithms can spot when people are attempting to game the system.

There will also be the inevitable privacy and security issues involved in giving diagnostic companies access to this kind of data. After all, the technology essentially records your conversations, and whilst the researchers stress that it’s the way you talk, not what you talk about, that interests them, it will take some convincing to assure people that their data is in safe hands.

It is a fascinating area, however, and one that is sure to see significant activity in the coming years. Certainly one to watch with interest.
