Why Stack Ranking Doesn’t Stack Up

The concept of stack ranking first gained widespread appeal when it was popularised in Winning, the best-selling book by rambunctious former GE chief executive Jack Welch.  The approach was an extreme form of employee differentiation that saw the top 20% of performers promoted, the middle 70% coached and supported, whilst the bottom 10% were generally let go.

The fame surrounding Welch at the time ensured the method was widely adopted by organisations wishing to mimic the success achieved at GE during his tenure.  That Welch grounded the process at GE in values such as candour, coaching and transparency passed most other organisations by, and as a consequence the results were often horrendous.

Anyway, this post isn’t designed to poke holes in stack ranking, but rather to look at the process of measurement, for measurement lies at the heart of stack ranking.

The whole concept, and indeed that of performance reviews in general, rests upon the ability to measure performance accurately.  New research by Samuel A. Swift and Don A. Moore of the University of California, Berkeley; Zachariah S. Sharek of Carnegie Mellon University; and Francesca Gino of Harvard Business School has highlighted the difficulties involved in measuring performance accurately.

The paper looks in particular at how difficult we find it to distinguish between strong results achieved under easy conditions and strong results achieved under challenging conditions.

“Across all our studies, the results suggest that experts take high performance as evidence of high ability and do not sufficiently discount it by the ease with which that performance was achieved,” the paper reports.

The study revealed that evaluators regularly rated people who had achieved results under easy conditions more highly than colleagues who had done less well at a much tougher task.

What’s more, this logical failing persisted even when the judges were made aware of the differing conditions under which the results were achieved, and even when the judges were highly skilled in their role.

“We thought that experts might not be as likely to engage in this type of error, and we also thought that in situations where we were very, very clear about [varying external circumstances], that there would be less susceptibility to the bias,” Gino says.

So, if even the best experts are so bad at judging performance, would you really trust them with something as radical as stack ranking?

Originally posted at Work.com
