Recently I attended the Deep Learning in Healthcare Summit, where one of the highlights was a presentation by MIT’s Daniel McDuff about the progress his spin-out Affectiva has been making in using machine learning to enable medical diagnoses to be made from images and videos taken on our smartphones.
They do some emotion-based marketing work too, but as you can perhaps imagine, it was their healthcare products that received the most attention at the event. Their system is built on a repository of around 4 million faces, collectively totaling around 50 billion data points.
The company believes that by monitoring your face via video or still images, it can detect things like your heart rate, stress levels and various other potential health issues.
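Affectiva hasn’t published the details of its pipeline, but a common way of recovering heart rate from face video is remote photoplethysmography (rPPG): blood flow causes tiny colour changes in the skin that show up most clearly in the green channel of the footage. The sketch below is my own rough illustration of that idea, not the company’s method; it assumes you already have a per-frame green-channel average from a face region, and uses numpy and scipy to bandpass-filter the signal and read the heart rate off its dominant frequency.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (BPM) from mean green-channel values of a face region.

    green_means: one value per video frame (assumed to come from a face
    detector averaging the green channel over the cheeks/forehead).
    fps: frames per second of the recording.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()            # remove the DC component

    # Keep only plausible heart-rate frequencies (0.7-4 Hz, i.e. 42-240 BPM).
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 4.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal)

    # The dominant frequency of the filtered signal is taken as the pulse.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0   # convert Hz to beats per minute

# Synthetic example: a 1.2 Hz (72 BPM) pulse sampled at 30 fps for 20 seconds.
fps, duration = 30, 20
t = np.arange(fps * duration) / fps
fake_signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
print(round(estimate_heart_rate(fake_signal, fps)))   # ~72
```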
It’s an approach that promises significant improvements over existing methods, both in cost and effectiveness. Of course, this kind of work isn’t unique to Affectiva.
Indeed, I wrote last year about an app called LifeRhythm, developed by researchers at the University of Connecticut, which is designed to automatically detect symptoms of depression via the numerous sensors built into most phones.
The app taps into GPS, accelerometers and the like to gauge the activity levels and social interactions of the user, and this information is then screened for signs of depression.
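The published work doesn’t spell out exactly which features LifeRhythm computes, but the general recipe for this kind of passive sensing is to roll raw sensor streams up into coarse daily behavioural features and then hand those to a screening model. A minimal sketch of that step, with entirely hypothetical feature names and thresholds of my own invention:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class GpsFix:
    lat: float
    lon: float

def haversine_km(a: GpsFix, b: GpsFix) -> float:
    """Great-circle distance between two GPS fixes, in kilometres."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def daily_features(gps_trace, accel_magnitudes, outgoing_calls):
    """Roll one day of raw sensor data into coarse behavioural features.

    The feature set here (distance travelled, movement level, sociability)
    is illustrative, not what LifeRhythm actually computes.
    """
    distance = sum(haversine_km(p, q) for p, q in zip(gps_trace, gps_trace[1:]))
    movement = sum(accel_magnitudes) / max(len(accel_magnitudes), 1)
    return {
        "km_travelled": distance,
        "mean_accel": movement,
        "outgoing_calls": outgoing_calls,
    }

# A trained screening model would consume these features; a trivial
# stand-in rule, purely for illustration:
def flag_for_follow_up(features) -> bool:
    return features["km_travelled"] < 1.0 and features["outgoing_calls"] == 0
```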
They are certainly on the right track, as evidenced by a similar paper that recently emerged from a team of researchers at the University of Rochester.
Their system uses selfie videos recorded by mental health patients and analyzes the footage for indicators of depression, such as heart rate, blink rate and head movement.
The system also monitors what users post on social media, how quickly they scroll and various other facets of their online meanderings.
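The paper’s exact pipeline isn’t described here, but blink rate is typically recovered by tracking an eye-openness measure (such as the eye aspect ratio computed from facial landmarks) frame by frame and counting how often it dips below a threshold. A minimal sketch of that counting step, assuming the landmark detection has already been done elsewhere and using an illustrative threshold rather than a published one:

```python
import numpy as np

def blink_rate(eye_openness, fps, threshold=0.2):
    """Count blinks per minute from a per-frame eye-openness signal.

    eye_openness: one eye-aspect-ratio value per frame (assumed to come from
    a facial-landmark detector; computing it is not shown here).
    threshold: openness value below which the eye is treated as closed.
    """
    closed = np.asarray(eye_openness) < threshold
    # A blink is a transition from open to closed.
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(eye_openness) / fps / 60.0
    return blinks / minutes

# 30 seconds of mostly-open eyes (openness 0.35) with three brief closures.
signal = np.full(900, 0.35)
for start in (100, 400, 700):
    signal[start:start + 4] = 0.05
print(blink_rate(signal, fps=30))   # 6.0 blinks per minute
```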
The Rochester system is currently only in demo testing, but it’s another fascinating development in this rapidly moving field.