The rise of AI in medicine

The last few years have seen a tremendous rise in algorithm-based healthcare services that aim to provide automated support and advice to users regarding their health.

For instance, I wrote recently about a Californian company that is requesting mammogram images from women with breast cancer so that it can train its algorithms to spot breast cancer accurately.

Another Californian organization, the California Healthcare Foundation (CHCF), is applying a similar technique to diabetic retinopathy, a complication of diabetes caused by damage to the blood vessels that supply the retina.

Left untreated it can lead to blindness, and it affects up to 80% of diabetics.  The damage can usually be treated with lasers, drugs or surgery, provided it is caught early.  The problem is that there are few symptoms in the early stages, so the best bet is for patients to have frequent check-ups.

Given the huge number of diabetes patients, however, and the strain on doctors' schedules, this seldom happens.  An automated approach therefore seems ideally suited to this scenario.

Automated decision support

The team set up a challenge on Kaggle, the competition platform for data scientists, uploading thousands of retina images and offering a $100,000 bounty to those who could best make sense of them.

The participants used the uploaded images to train algorithms to identify the subtle signs of the disease.  The winning entrant, from Warwick University, devised an algorithm that could successfully identify the disease (as verified by a doctor) 85% of the time.
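For readers curious what training such a classifier involves, here is a minimal sketch in Python using TensorFlow/Keras.  To be clear, this is not the winning Warwick entry; the directory names, image size and network shape are all assumptions made purely for illustration.

    # Minimal sketch of training an image classifier on labelled retina
    # images.  Illustrative only -- NOT the winning Kaggle solution; the
    # directory layout ("retina/train", "retina/val") and all parameters
    # here are assumptions for the example.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    IMG_SIZE = (256, 256)

    # Expects one subfolder per class, e.g. retina/train/healthy
    # and retina/train/diseased.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "retina/train", image_size=IMG_SIZE, batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "retina/val", image_size=IMG_SIZE, batch_size=32)

    # A small convolutional network: stacked conv/pool blocks learn
    # local image features, then a dense head makes the binary call.
    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of disease
    ])

    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Train, then track accuracy on the held-out set -- the analogue of
    # the 85% agreement-with-a-doctor figure quoted above.
    model.fit(train_ds, validation_data=val_ds, epochs=10)

In practice the Kaggle entrants would have used far deeper networks and heavy image preprocessing; the point here is only the overall shape of the workflow: labelled images in, a probability of disease out.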

So the algorithm offered accuracy comparable to a doctor's, but without a doctor's very human limitations of cost and availability.  It can be installed in a facility as quickly as the system can be copied over, no training is required, and there is no need to ship images and results back and forth to get them to the right people.

An interesting part of the challenge was the requirement that any winning solution be open source.  The plan is to roll the system out throughout California, but there remain certain regulatory issues to overcome before that happens.

A slightly more pressing challenge is one of perception.  Recent research suggests that despite the often superior decisions such systems can give us, we are frequently more willing to accept inferior decisions from people than to trust the machines.

The study, co-authored by Wharton's Cade Massey, was instigated after Massey had experienced firsthand the objections to accepting machine-based intelligence within organizations.

“We know that people don’t like [those data-driven answers], but we didn’t really understand why. And we sure didn’t have any insight into what we could do to make people more open to them,” he says.

Experiments revealed that we are generally much more forgiving of an erroneous decision if it is made by a human.  If a machine makes even one mistake, that is usually enough for us to discard what it can offer for good.

So whilst there may be very tangible benefits to be had from automated decision making, I suspect it will be a little while yet before it is deployed in the mainstream.


5 thoughts on “The rise of AI in medicine”

  1. I think that perception thing will be key. It seems much more likely that we'll reject AI for 'irrational' reasons rather than anything rational.

  2. Great article. Another point, perhaps missed, is that there is a chemical connection (oxytocin) that builds trust in human interactions, which perhaps begs an interesting research question. I think a correction needs to be made: all authors appear to be from UPenn, Wharton. Cade Massey was not lead author.

    • You're quite right, I've corrected Cade's institution accordingly, thanks for pointing out my error. That's an interesting point about the chemistry of trust. There have been various studies suggesting robots/AI need to have some more human flaws in order to be regarded as our equal, but most explorations have looked at the psychology of things rather than the chemistry.

  3. Very good article, Adi! It is in line with an article I wrote recently, "Socially Conscious IBM", concerning IBM's use of Watson and advanced analytics for finding cures for various types of cancer.
