The rise of social technologies has enabled the ideation process to be outsourced to the crowd in ways never before possible. However, whilst soliciting input from thousands of people can seem like a good thing, it can also make sifting through the suggestions incredibly challenging. That was the conclusion of a recent paper from researchers at Arizona State University.
“The very nature of crowdsourcing means that ideators can be overwhelmed by the number of ideas generated, rather than inspired by them,” they say. “There are several issues that need to be considered in systems that operate at this scale, such as the organization of the ideas, as well as the subsequent convergence on the best ones.”
Sorting the wheat from the chaff
To help overcome this challenge, the team tested a hypothesis whereby participants were given a range of peripheral tasks, such as rating and combining others’ ideas, to see what impact this had on the quality of ideation.
“Embedding peripheral micro-tasks within the ideation process may enable such systems to move from passive to active forms of inspiration and support, resulting in a stronger ideation session,” the authors say.
The researchers recruited participants via Amazon’s Mechanical Turk, dividing them into a control group, an exposure group, and multiple task groups. Participants in every group were given the same ideation prompt and asked to contribute ideas. The exposure group, however, was also given an ‘inspiration panel’ showing the ideas of others. Those in the task groups saw the same panel but were additionally asked to rate, compare, or combine those ideas.
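As a rough way to picture the design, the three conditions can be thought of as differing only in what the interface affords. The condition names and feature flags below are an illustrative reconstruction, not the study’s actual implementation.

```python
# Illustrative reconstruction of the three experimental conditions;
# the names and flags are assumptions, not the paper's code.
CONDITIONS = {
    "control":  {"inspiration_panel": False, "microtasks": False},
    "exposure": {"inspiration_panel": True,  "microtasks": False},
    "task":     {"inspiration_panel": True,  "microtasks": True},  # rate, compare, combine
}
```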
The participants were then scored on the number of ideas they produced, as well as on those ideas’ breadth and depth. In this context, breadth refers to the number of distinct concepts a user explored, whereas depth is the number of ideas explored within a single concept. The researchers also measured how many ideas each participant viewed from the inspiration panel, along with ‘inspiration influence’: a user’s average similarity between each idea and the most similar of the inspirations viewed before it.
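To make these metrics concrete, here is a minimal sketch of how they might be computed. The `Idea` structure, the concept labels, and the Jaccard word-overlap similarity are all illustrative assumptions; the paper’s actual measures may well differ.

```python
# Illustrative sketch of the three ideation metrics described above.
# The Idea structure, the hand-assigned concept labels, and the Jaccard
# word-overlap similarity are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Idea:
    text: str
    concept: str  # hypothetical concept label, e.g. assigned by coders

def breadth(ideas: list[Idea]) -> int:
    """Number of distinct concepts a user explored."""
    return len({idea.concept for idea in ideas})

def max_depth(ideas: list[Idea]) -> int:
    """Largest number of ideas explored within a single concept."""
    counts: dict[str, int] = {}
    for idea in ideas:
        counts[idea.concept] = counts.get(idea.concept, 0) + 1
    return max(counts.values(), default=0)

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets, standing in for the paper's measure."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def inspiration_influence(ideas: list[Idea],
                          inspirations_before: list[list[str]]) -> float:
    """Average, over a user's ideas, of the similarity between each idea
    and the most similar inspiration viewed before it."""
    scores = [
        max(similarity(idea.text, insp) for insp in seen)
        for idea, seen in zip(ideas, inspirations_before)
        if seen  # skip ideas with no preceding inspirations
    ]
    return sum(scores) / len(scores) if scores else 0.0

ideas = [Idea("solar-powered phone charger", "energy"),
         Idea("kinetic charger built into shoe soles", "energy"),
         Idea("app for trading spare charging credits", "software")]
print(breadth(ideas))    # 2 concepts explored
print(max_depth(ideas))  # 2 ideas within the 'energy' concept
```

On this reading, a higher inspiration influence score would indicate a user whose ideas tend to closely echo the inspirations they had just viewed.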
Better ideas
When the results were analyzed, it emerged that those in the microtask groups outperformed their peers in the exposure group in terms of the breadth of ideas they produced. It wasn’t quite as straightforward as that, however, as a number of individual factors played a significant part.
For instance, the timing of ideation was crucial, as was the productivity of the user. The best ideas tended to emerge in the second half of a session, likely because that is when we run out of easy ideas and therefore gain more from the inspiration of others. What’s more, the more productive users were also found to be more likely to turn to others’ ideas for inspiration, and to use them effectively.
“Our research provides some support and guidance in explicitly embedding microtasks into ideation, which will not only be aiding ideators in their idea generation, but will also be generating information useful for converging on the best ideas,” the authors conclude.