A recent study by Peter Cappelli and Martin Conyon set out to examine how performance reviews are actually used by companies, and how useful they really are. Believed to be the first study of its kind, it offers valuable insight into the role reviews play in practice.
Interestingly, the study confounds a number of common assumptions about the event. For instance, far from their rather dire image, performance reviews were found to provide genuinely valuable information to managers. At the same time, however, the study reveals that reviews serve relational purposes as much as contractual ones.
Inside the performance review
The study drew on data collected from a large company over a ten-year period. One of the central hypotheses the researchers wanted to test was whether the performance review is really about performance at all. Is it a contractual process whereby we're set certain goals and the review then assesses how well we've achieved them?
Well, in reality, it would appear not. In many ways this makes sense: while you're contracted to your employer, your relationship with your manager has no such legal bounds. It's a much more fluid relationship, and this is typically reflected in the performance review.
The researchers also hoped to examine the very essence of performance itself, and in particular to test the notion that top performers are always top performers (and vice versa). With practices like stack ranking continuing to endure, it's an important question to answer.
This notion that good people will always be good people would appear to make recruitment incredibly easy, but alas, the reality is not like that.
Using a scale of 0–100, where 100 represents a fixed perspective on talent in which the best are always the best (and the worst likewise), the study found the actual figure to be just 27.
It's perhaps easy to see why we fall into this easy standardization of people, a fallacy broadly referred to as the fundamental attribution error. When we see someone behave in a particular way, we assume it's because of who they are rather than the circumstances they're in.
It’s something I’ve touched on previously, with studies suggesting that we regard people who succeed in easier circumstances more positively than those who perform moderately in much harder circumstances.
Another interesting aspect of the analysis was how our ratings change when we have the same boss versus a new one. The hypothesis was that with the same boss we get very comfortable and our ratings plateau, whereas a new boss will see things with a fresh (and more realistic) perspective.
The level of variance in one's scores is especially crucial in environments where stack ranking prevails: if ratings are so dependent upon one's boss, it seems crazy to ditch those who rank badly, right?
Of course, just as it's not particularly wise to draw firm conclusions about an employee's talents based upon isolated performance, it is equally risky to draw too many conclusions from a study involving a single firm.
Nevertheless, the study should provide some food for thought as to how your own performance reviews function and allow you to conduct your own experiments within your organization.