How To Get The Most Out Of Crowdsourcing

Over the years I’ve written numerous times not only about the growth in crowdsourcing and other methods of open innovation, but also about the various studies seeking to understand how we can use these approaches effectively.  Crowdsourcing probably reached peak popularity a few years ago, when numerous studies emerged looking at everything from Innocentive-style challenges to GalaxyZoo-style citizen science projects.

The academic world has not been resting on its laurels, however, and a recent paper from the University of Southern California explores how you can make the best of any crowdsourcing project you embark upon.

Crowdsourcing carries very different risks depending on whether you’re hosting the challenge or participating in it.  Hosts only pay out when they pick a winner, so their exposure is fairly limited, but participants risk putting in a lot of effort with no reward at the end.

The study explores how to manage this trade-off so that you get the right mix of quality and quantity in your competition.  In essence, you strive to reduce the uncertainties as much as you can, whether by guaranteeing the rewards or by providing frequent feedback and engagement to participants.

Reducing uncertainties

These uncertainties tend to fall into one of three categories:

  • Ill-defined criteria for success – many innovation challenges are by their very nature uncertain, as the sponsors don’t have a clear idea of what a winning solution will look like (which is why they’re running the challenge in the first place).
  • The chance of bad actors – the vast majority of those who set up a challenge are responsible, but the ease with which one can be set up is bound to attract people who simply want to take advantage of free labor.
  • The uncertainties of competition – competitions are by their very nature uncertain, especially virtual ones where you don’t know who your competitors are.

So what can you do to reduce these uncertainties?  Based upon data from a real-life crowdsourcing platform, the researchers came up with a number of key strategies.  The first is to guarantee the prize by pre-paying the promised award amount in full.  This strategy was found to dramatically increase participation and to remove uncertainty around whether the sponsor will pay.

If that’s not something you’re comfortable with, the next best strategy is to engage regularly with participants.  It’s normal for contests to run for a prolonged period, with some lasting several months.  Especially for longer contests, it’s important to engage with participants and provide feedback on their submissions.  Even seemingly insignificant feedback has proven to be hugely reassuring, if only because it convinces participants that the sponsor is actively involved in the challenge.  It also reassures them that the challenge is above board, and that payments will be made.

The way you participate was also found to matter, with contestants looking for clues in your feedback about the competition they face.  If people believe a clear winner has already emerged, it’s very demotivating.  As such, it’s not a good idea to give very high ratings to submissions, especially in the early stages, although when the prize was guaranteed, the demotivating effect of high ratings was diminished.

Equally, you don’t want to rate solutions too poorly.  If you rate too many entries as unacceptable, participation suffers because it damages the relationship with participants as a whole, raising the fear that no solution will ultimately be selected and that their time is being wasted.

It perhaps goes without saying that crowdsourcing contests are not guaranteed to succeed, but there are strategies you can deploy to improve your prospects.  This paper provides an interesting new angle to explore, and will hopefully give you some useful tips for your next crowdsourced challenge.
