Last year I covered a project by researchers at Stanford and Carnegie Mellon to use citizen scientists to verify the results of academic research.
Participants played a relatively simple game that the designers hoped would test the rigor of the studies placed under the microscope.
Using the crowd mkII
Whilst that project used game-based approaches, a recent project undertaken by Harvard is using prediction markets to put research through its paces.
The prediction market was used to estimate the reproducibility of over 40 experiments published in prominent psychology journals.
The crowd proved to be good at predicting reproducibility, correctly forecasting the outcome for 71% of the studies in the sample.
“This research shows for the first time that prediction markets can help us estimate the likelihood of whether or not the results of a given experiment are true,” the authors say. “This could save institutions and companies time and millions of dollars in costly replication trials and help identify which experiments are a priority to re-test.”
The results are not quite so positive for the research industry itself, with a worrying 61% of studies failing to reproduce the original results.
A quick and effective way of testing
The authors therefore believe that tools such as prediction markets can prove invaluable, offering a quick and easy way of testing findings.
“Top psychology journals seem to focus on publishing surprising results rather than true results,” the authors say. “Surprising results do not always hold up under re-testing. There are different stages at which an hypothesis can be evaluated and given a probability that it is true. The prediction market helps us get at these probabilities.”
Of course, prediction markets are a fairly mature methodology these days, with successful applications in areas such as medicine, economics, and even science and technology forecasting.
The experiment was part of The Reproducibility Project: Psychology, an open science project designed to test the reproducibility of psychological research.
Participants were given $100 to invest in the market, along with information about each study, including the full paper and details about its authors. With this information, they could then trade on the likelihood that each result would reproduce.
“One of the advantages of the market is that participants can pick the most attractive investment opportunities,” the authors say. “If the price is wrong and I’m confident I have better information than anyone else, I have a strong incentive to correct the price so I can make more money. It’s all about who has the best information.”
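To make the mechanics concrete, here is a minimal sketch of how a prediction market can turn individual trades into a collective probability estimate. It uses the standard logarithmic market scoring rule (LMSR), a common automated market maker for prediction markets; to be clear, this is an illustration under my own assumptions, not the trading mechanism the study necessarily used, and the class name and liquidity parameter are mine.

```python
import math

class LMSRMarket:
    """Minimal binary prediction market using the logarithmic
    market scoring rule (LMSR). The current price of the
    'replicates' share can be read as the crowd's probability
    estimate that the study will reproduce. Illustrative only."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity      # higher b = prices move more slowly per trade
        self.q = [0.0, 0.0]     # shares sold so far: [replicates, fails]

    def _cost(self, q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current price of one share of `outcome` (0 or 1), in [0, 1]."""
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the dollar cost charged,
        i.e. the change in the cost function."""
        old = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - old

market = LMSRMarket(liquidity=100.0)
print(f"opening estimate: {market.price(0):.2f}")    # 0.50 before any trades
cost = market.buy(0, 30)                             # a trader backs replication
print(f"after a ${cost:.2f} trade: {market.price(0):.2f}")  # price rises toward 1
```

The point the quote above makes falls out of the arithmetic: a trader who believes the price is wrong profits by pushing it toward their estimate, so the standing price aggregates whoever has the best information. The liquidity parameter b controls how much capital it takes to move that price.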
The researchers regard these initial results as positive and believe they offer the potential to scale the approach up further. They hope to conduct similar trials in other subject areas to see whether the success carries over into other domains.
“Our research showed that there is some ‘wisdom of the crowd’ among psychology researchers,” the authors conclude. “Prediction accuracy of 70 percent offers an opportunity for the research community to identify areas to focus reproducibility efforts to improve confidence and credibility of all findings.”