Last month I looked at the way the motivation of participants in citizen science projects changes as they become more established members of the community.
This is crucial because citizen science networks tend to follow the Pareto principle, with a small core of users doing the bulk of the work.
So how can you migrate users from the periphery into the core? That was the question posed by a recent study conducted by researchers at Cornell.
The researchers wanted to test whether paying people to be more courageous in the tasks they undertake would actually work. Might a sponsor benefit, for instance, by skewing its funding towards higher-risk tasks in the hope that participants will veer towards these rather than sticking to safer ground?
“You have a budget that you can afford, you want to use this budget to maximize the result,” the authors say. “We give a formula for what is achievable.”
The paper explored the topic through the unusual lens of a slot machine, using a setup known as the multi-armed bandit problem.
This is a commonly used construct in computer science: you're faced with a row of slot machines. You know that some of them pay out fairly reliably, while you're unfamiliar with the others, which may pay out more or may pay out less.
How much time should you spend experimenting with the unknown machines rather than sticking with the ones you already know?
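To make the trade-off concrete, here is a minimal sketch of one standard bandit strategy, epsilon-greedy, in which you gamble on a random machine a small fraction of the time and otherwise play whichever machine has paid out best so far. The payout probabilities and the choice of strategy are illustrative assumptions, not details taken from the paper.

```python
import random

# Hypothetical payout probabilities for five slot machines (arms).
TRUE_PAYOUT = [0.2, 0.5, 0.35, 0.6, 0.1]
EPSILON = 0.1          # fraction of pulls spent exploring at random
ROUNDS = 10_000

pulls = [0] * len(TRUE_PAYOUT)      # times each arm has been played
rewards = [0.0] * len(TRUE_PAYOUT)  # total reward earned on each arm

def estimate(arm):
    """Average observed payout for an arm (optimistic 1.0 if untried)."""
    return rewards[arm] / pulls[arm] if pulls[arm] else 1.0

total = 0.0
for _ in range(ROUNDS):
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_PAYOUT))          # explore a random machine
    else:
        arm = max(range(len(TRUE_PAYOUT)), key=estimate)  # exploit the best so far
    reward = 1.0 if random.random() < TRUE_PAYOUT[arm] else 0.0
    pulls[arm] += 1
    rewards[arm] += reward
    total += reward

print(f"Total winnings: {total:.0f} over {ROUNDS} pulls")
print("Estimated payouts:", [round(estimate(a), 2) for a in range(len(TRUE_PAYOUT))])
```

Even this crude rule tends to home in on the best machine while still spending a fixed slice of its pulls on the unknowns.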
The challenge has prompted no shortage of proposed strategies over the years. The authors wanted it to reflect a citizen science scenario, however, so they added a twist: you're not actually playing the machines yourself.
Instead, you've hired people to play on your behalf, with each player taking a percentage of the overall winnings. So although you might follow the optimal strategy if you were playing yourself, your hired players may well go for the safe option more often than not.
The researchers produce a formula that shows how much more could be won if players experimented more than they currently do. Using it, a sponsor can fairly easily work out how much it would need to pay each player to take a few more risks.
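The paper's own formula isn't reproduced here, but the basic logic can be sketched: a self-interested player will only try an unfamiliar machine if their expected cut is at least as good as the cut from the safe choice, so the sponsor's payment needs to cover that gap. The function, the player's share and the estimates below are all hypothetical, chosen purely for illustration.

```python
# Hypothetical illustration of the incentive idea (not the paper's formula):
# a myopic player picks the arm with the highest current estimated payout,
# so to get them to try a different arm the sponsor must cover the gap
# between the safe arm's expected share and the target arm's expected share.

def bonus_to_explore(estimates, target_arm, player_share=0.5):
    """Smallest payment that makes a myopic player willing to pull target_arm."""
    safe_value = max(estimates) * player_share       # expected cut from the safe choice
    target_value = estimates[target_arm] * player_share
    return max(0.0, safe_value - target_value)

# Current per-pull payout estimates for three arms, purely illustrative.
estimates = [0.6, 0.35, 0.1]
for arm in range(len(estimates)):
    print(f"arm {arm}: bonus needed = {bonus_to_explore(estimates, arm):.2f}")
```

In this sketch the bonus shrinks to zero once an arm's estimate catches up with the safe one, which is the point at which the player would explore it voluntarily anyway.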
It emerged, for instance, that when a task is particularly hard, it works best to switch randomly between forcing agents to explore and letting them choose selfishly on their own, so that the gains from one side balance out the losses from the other.
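A rough sketch of what such a mixed policy might look like, assuming the sponsor forces the least-tried option some fixed fraction of the time and otherwise defers to the player; the forcing probability and payout figures below are invented for illustration, not values from the study.

```python
import random

# Hypothetical mixed policy: on each round the sponsor either forces the
# least-tried arm (with probability FORCE_PROB) or lets the player pick
# whichever arm currently looks best to them.

FORCE_PROB = 0.3
TRUE_PAYOUT = [0.55, 0.4, 0.7]   # unknown to the player; illustrative values
pulls = [0] * 3
rewards = [0.0] * 3

def player_choice():
    """A self-interested player picks the arm with the best observed average."""
    return max(range(3), key=lambda a: rewards[a] / pulls[a] if pulls[a] else 1.0)

for _ in range(5000):
    if random.random() < FORCE_PROB:
        arm = min(range(3), key=lambda a: pulls[a])   # force exploration
    else:
        arm = player_choice()                         # selfish choice
    pulls[arm] += 1
    rewards[arm] += 1.0 if random.random() < TRUE_PAYOUT[arm] else 0.0

print("pulls per arm:", pulls)
```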
They want to test the idea further, and propose a kind of net present value calculation for risky choices: a novel idea or approach might not produce immediate returns, but in the longer term it may well pay off, which justifies the investment up front.
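As a rough illustration of that logic (with invented numbers, not figures from the study), discounting future returns back to today can still leave a slow-starting, riskier approach ahead of a safe and steady one.

```python
# Hypothetical net-present-value comparison: a risky approach that pays
# nothing at first but more later can still beat a safe, steady one once
# future returns are discounted back to today.

def npv(cashflows, rate=0.05):
    """Discount a list of yearly returns back to present value."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

safe  = [10, 10, 10, 10, 10]   # steady returns, illustrative numbers
risky = [0, 0, 5, 25, 40]      # slow start, larger later payoff

print(f"safe NPV:  {npv(safe):.1f}")
print(f"risky NPV: {npv(risky):.1f}")
```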