Why we might reject Watson

The last few years have seen a tremendous rise in the number of big-data-based analytical and decision-support tools.  Indeed, it seems increasingly likely that systems such as Watson will be used to provide diagnostic advice rather than mere number crunching.

Alas, recent research suggests that despite the often superior decisions such systems can deliver, we are more likely to accept an inferior decision from a person than to trust the machine.

The study, co-authored by Wharton's Cade Massey, was instigated after Massey had experienced firsthand the objections to accepting machine-based intelligence within organizations.

“We know that people don’t like [those data-driven answers], but we didn’t really understand why. And we sure didn’t have any insight into what we could do to make people more open to them,” he says.

Experiments revealed that we are generally much more forgiving of an erroneous decision when it's made by a human.  If a machine makes even one mistake, that is usually enough for us to discard what it can offer for good.

The authors believe that because algorithms are perceived as both perfect and incapable of learning, once one has erred, people expect it to keep making the same mistakes.

Whilst that may have a degree of truth to it, the flipside is that we are lenient on humans because we assume they will learn from their mistakes, an assumption that may prove optimistic.

Of course, as previous studies have shown, the ability to accept errors is a vital part of the learning process, especially over the long term.  It is, however, something that humans often struggle with.

Interestingly, it seems that we're more likely to succumb to this anti-machine bias when the decision-making stakes are higher.

The challenge, therefore, is restoring confidence in algorithmic decision making once the algorithm has made a mistake.

The study suggests that giving people some input into the algorithm's output is one way around this.  The authors found that when users could adjust the number the algorithm produced, they had more confidence in the outcome, and that confidence held up better in the event of a poor result.
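The paper describes this intervention behaviourally rather than in code, but a minimal sketch of what such a bounded adjustment might look like in practice is below.  The function name, parameters and the 10% cap are illustrative assumptions of mine, not details taken from the study.

    def bounded_adjustment(model_forecast: float, user_forecast: float,
                           max_shift: float = 0.10) -> float:
        """Blend a user's estimate into a model forecast, capping the
        user's influence at +/- max_shift (a fraction of the forecast).

        A hypothetical sketch of the 'let users adjust the number'
        intervention; the cap and names are illustrative assumptions.
        """
        # Largest absolute change the user is permitted to make.
        cap = abs(model_forecast) * max_shift
        # Clamp the user's requested shift into the allowed band.
        shift = max(-cap, min(cap, user_forecast - model_forecast))
        return model_forecast + shift

    # Example: the model predicts 200 units and the user believes 250.
    # With a 10% cap, the final forecast moves only as far as 220.
    print(bounded_adjustment(200.0, 250.0))  # -> 220.0

The design point is that the user retains a sense of control, while the cap keeps the final number close to the algorithm's typically more accurate estimate.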

With the use of big data only likely to rise in the coming years, it's a challenge that you sense will be tackled with increasing frequency.  The paper therefore provides an excellent starting point on that journey.

The authors are optimistic that acceptance will come, however, and highlight how receptive we now are to services such as Amazon's recommendation algorithm.  Just as that has become a natural part of the shopping process, they believe other algorithmic support tools will follow.

Of course, trusting a computer for something like a medical diagnosis is perhaps a harder sell than trusting it to recommend a movie we might like, but trust in services such as GPS navigation has grown in recent years, so it should not be ruled out.


2 thoughts on “Why we might reject Watson”

  1. The problem is more that we don't accept black-box decisions that lack interactivity. People use Google every day and have no problem with machine-driven answers, because it is fairly transparent what Google does and we can refine the search until it gives the result we want. If Watson had the same level of interactivity, we would be willing to accept its results.

  2. It's only apparently transparent what Google does. Two different people searching for the same thing are highly likely to get different results on the first page – and not by chance.

    Google uses your various histories (search, browsing, time spent on pages hosted on Google properties etc.) to find out … "what's good for you". This is what Google serves to you. It's not what it thinks are the unbiased, clean, objective, most relevant results for your search.
