I’ve written a bit recently about the culture of collaboration (or giving, if you want to be precise), and how challenging it is to create such a culture (and how easy it is to destroy it). When it comes to understanding why people collaborate or share with others, fairness plays a big part. Game theory has a long tradition of exploring how our perception of fair play shapes the way we interact with others, and it’s certainly true in the workplace. For instance, I wrote yesterday about the crucial role fairness plays in employee engagement.
A recently published study delves further into the role fairness plays in the workplace. It set out to get to the root of why we help other people, given that doing so often has no direct benefit for us and frequently costs us time and energy. It suggests that, far from having altruistic roots, there may be something darker behind our willingness to help others.
The ultimatum game, popularly used in game theory experiments, was chosen as the focal point of the study. The ultimatum game has a simple premise. One player has some cash (or other resource) and offers to split it with a second player, who can then either accept or reject the offer. A rejected offer means neither player gets to keep the bounty. Previous studies suggest fairness plays a huge role, with players rejecting any offer they perceive as unfair, even though keeping even a small amount would leave them better off than the nothing they entered the game with.
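The mechanics of a single round are simple enough to sketch in a few lines of code. This is a minimal illustration, not the study's actual model; the pot size, function names, and thresholds are all assumptions made for the example.

```python
POT = 10  # the resource the proposer offers to split (illustrative value)

def play_round(offer, acceptance_threshold):
    """One round of the ultimatum game.

    The proposer offers `offer` out of POT; the responder accepts only if
    the offer meets their fairness threshold. Returns the pair
    (proposer_payoff, responder_payoff).
    """
    if offer >= acceptance_threshold:
        return POT - offer, offer   # accepted: each keeps their share
    return 0, 0                     # rejected: neither gets anything

# A purely "rational" responder accepts any positive offer...
print(play_round(1, acceptance_threshold=1))   # → (9, 1)
# ...while a fairness-minded responder rejects a lowball offer,
# even though rejection leaves them with nothing.
print(play_round(1, acceptance_threshold=5))   # → (0, 0)
```

The second call captures the puzzle the study probes: rejecting the unfair offer is materially worse for the responder, yet it is what real players reliably do.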
The researchers tested this using a mathematical model, and an interesting trend emerged: the opposite of those earlier experimental findings. In the simulations, the second player behaved logically, accepting any offer, which in turn promoted unfair behaviour by player 1 and, in a sense, flushed fairness out of the process.
This is the classic contrast between the so-called rational, logical homo economicus of traditional economics and the more realistic actor used in behavioural economics. To make the simulation more lifelike, the researchers added what they nicely call negative assortment, which in layman’s terms means pairing players who are dissimilar to one another.
So, for instance, if one player was lifelike and rejected any offer that wasn’t fair, whilst another was rational and accepted any offer, that pairing represents negative assortment. Its introduction to the game brought with it an interesting bedfellow: spite.
It emerged that when players had these contrasting perspectives, a spiteful player would make only unfair offers, whilst rejecting unfair offers from other players. In the traditional version of the game, spiteful players don’t tend to do well, but when negative assortment was added to the simulation, the behaviour flourished.
Whilst that sounds bad, spite turned out to actually promote fair play, as fairness proved a good protection against it.
Think of it this way. A “gamesman” is someone who only makes unfair offers to benefit himself but accepts whatever comes his way because he believes it’ll all wash out in the end. “Gamesmen become a target for spite because they’re making unfair offers,” the researchers said. The “spiters” will reject those offers, eventually killing off the gamesmen.
What’s more, the presence of the spiters actually benefits fair players: because fair players never make unfair offers, they never risk rejection by a spiter. Fairness therefore becomes the best strategy.
Now, of course, this is still just a maths-based simulation, but in a world where we’re increasingly prompted to be fair and social with our time and resources, it provides an interesting auxiliary reason to do so.