As AI continues to develop, its role in the workplace expands ever outwards. Few sectors have seen this more vividly than the gig economy, where workers are routinely managed by algorithmic managers they can neither see nor engage with. Previous research has shown that workers generally accept this arrangement provided they perceive the technology to be fair, but when that ceases to be the case, the relationship can quickly break down, as there are so few avenues available for questioning decisions.
The lessons from the gig economy should be illustrative as algorithms play a growing role in monitoring and supervising workers across an expanding range of industries and disciplines. A recent paper from Wharton dives deeper into the gig economy, and specifically into how gig workers interact with their “AI bosses”, in a bid to produce lessons for other parts of the economy.
Algorithmic management
The term algorithmic management refers to the process whereby an algorithm performs any of the jobs typically performed by a human manager. This could include evaluating and disciplining workers, assigning them projects, or even hiring and firing them. Such systems can be extremely pervasive in the modern workplace, with the typical Uber driver having hundreds of these interactions over a single shift.
While Uber drivers have been oft-analyzed recipients of algorithmic management, they are far from alone in the modern economy. For instance, most checkout staff at the grocery store will now be assessed by an algorithm for things like scanning speed. Amazon warehouse workers are similarly under the algorithmic thumb. Indeed, whenever we’re asked to rate the service we received, we can bet that this information will be used by an algorithm in some way.
When Czech author Karel Capek first coined the word robot in his play RUR, he borrowed the Czech word for “slave”. The use of algorithmic managers can make that etymology feel uncomfortably apt today, with modern workers often feeling like slaves to their algorithmic masters. In Inside the Invisible Cage: How Algorithms Control Workers, Hatim Rahman explores how workers at one of the gig economy work platforms feel about their lot.
Rahman argues that the algorithms that underpin these platforms form an “invisible cage” in which high-skilled workers are controlled by opaque systems that are fundamental to their success, yet whose workings they have little real understanding of.
Escaping the cage
The Wharton researchers highlight how workers are attempting to break free from this invisible cage and regain a degree of autonomy in their work. They explain that we often have a series of small interactions with systems that can make us think we have choice and autonomy, even if much bigger issues are decided for us.
Other workers attempt to deviate from the suggestions of their algorithmic overlords, or even try to game the system in some way. For instance, the researchers explain how some drivers will turn the app off and on again, or decline multiple rides in quick succession, in a bid to trigger surge pricing. Although drivers do this out of self-interest, it can also help the ride-hailing company, as workers ultimately stay online longer.
Jobs where algorithmic managers are common are often those that academics might refer to as “bad jobs”, but the research suggests things are not as straightforward as that. Instead, the researchers believe gig work is often a “good bad job”, in that the work can be meaningful and attractive while also suffering from structural problems. For instance, many gig drivers say they enjoy their work, despite the very real problems exposed by the likes of Alex Rosenblat in Uberland.
Similarly, Amazon warehouses have become notorious for the extreme conditions workers operate under, but they also pay workers well. As Rahman explains, while algorithmic management is often associated with low-wage jobs, it’s also common in gig-based knowledge work. Indeed, research from Imperial College London outlines how the kind of A/B testing common on websites is often deployed to cajole more out of knowledge workers, with companies monitoring keystrokes or the amount of time they’re logged in. Those in customer-facing roles are constantly monitored via the ratings customers leave.
As the Wharton researchers explain, what begins among the most marginalized and vulnerable workers eventually makes its way across the rest of the economy. They urge companies not to cede all managerial oversight to machines and to ensure humans remain in the box seat. This can help to overcome many of the flaws that remain inherent in these technologies, not least the discrimination that arises all too often from the historical biases in the data used to train these systems.
They also urge organizations, and indeed managers, to ensure that effective mechanisms are in place for people to contest algorithmic decisions. Previous studies have shown how important this is for people’s perceptions of the fairness of these systems. It seems unlikely that the genie can be put back in the bottle, but we can at least ensure that a process that should be among the most humane in the workplace remains so, rather than being stripped of all that makes the best managers so effective.