The wisdom of crowds has quickly become a heuristic to live by: the theory is that aggregating multiple perspectives will deliver a better answer than any one person can individually.
It’s an argument that’s commonly used in support of democracy, but the theory rests upon the quality, variety and impartiality of the information people consume when making their decisions. In the worst cases, an ignorant majority can easily outweigh a more knowledgeable minority, so the wrong answer wins out over the correct one.
The stupidity of crowds
To overcome this, a team from Princeton and MIT developed an algorithm that aims to provide more nuanced insight into the information we consume. It centers around asking people a question in two ways:
- What do you think the right answer is?
- How popular do you think each answer will be?
It turns out that the right answer is often much more popular than we think it will be.
The researchers asked participants a number of questions. For instance, some were asked whether Philadelphia was the capital of Pennsylvania. Alongside answering the question, they were asked how likely it was that others would answer ‘yes’. It turned out that most people not only got the answer wrong (the capital is Harrisburg), but they thought most other people would do likewise.
Those who got the answer right, however, correctly predicted that many of their peers would get it wrong. In other words, the vast majority of people, right or wrong, expected others to answer incorrectly. Indeed, more people predicted that their peers would get the answer wrong than actually did so, which meant the correct answer was more popular than anyone expected.
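Based on the description above, the decision rule can be sketched in a few lines: pick the answer whose actual share of votes most exceeds the share respondents predicted it would receive. The numbers below are hypothetical, loosely modelled on the Philadelphia example, not figures from the study.

```python
def surprisingly_popular(actual, predicted):
    """Return the answer whose actual popularity most exceeds its
    predicted popularity.

    actual    -- dict mapping answer -> fraction of people who chose it
    predicted -- dict mapping answer -> average predicted share of the vote
    """
    return max(actual, key=lambda a: actual[a] - predicted[a])

# Hypothetical survey numbers: 65% wrongly answer "yes", and on average
# people predict "yes" will get 75% of the vote. "No" is chosen by 35%
# but predicted to get only 25% -- it is surprisingly popular, so the
# method selects it despite "yes" winning a simple majority vote.
actual = {"yes": 0.65, "no": 0.35}
predicted = {"yes": 0.75, "no": 0.25}

print(surprisingly_popular(actual, predicted))  # -> no
```

Note that a simple majority vote over the same data would return “yes”; the method flips the outcome only because the correct answer outperformed expectations.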
Surprisingly popular
The method, which the researchers dub surprisingly popular (SP), still clings to a vestige of democracy because there are no preconceptions about who will have specialized information. The only criterion is that such information exists.
“The SP method is elitist in the sense that it tries to identify those who have expert knowledge,” they say. “However, it is democratic in the sense that potentially anyone could be identified as an expert. The method does not look at anyone’s resume or academic degrees.”
When the algorithm was tested, it was found to reduce errors by 21.3% compared with simple majority votes, and by 24.2% compared with confidence-weighted votes.
“The argument in this paper, in a very rough sense, is that people who expect to be in the minority deserve some extra attention,” the researchers say. “In situations where there is enough information in the crowd to determine the correct answer to a question, that answer will be the one [that] most outperforms expectations.”
It promises to open up interesting ways of understanding how crowds might behave, which, given the rather parlous state of polling in the past year or so, is no bad thing. It also promises to help us engage the crowd in the right way, so that it is wise rather than misinformed.