Most people feel a profound sense of unfairness when they encounter discrimination, and so tend to judge humans harshly when they engage in discriminatory behavior. Research from Yale reveals, however, that we're often far more accepting of discrimination when it's performed by algorithms.
Algorithmic bias
The researchers conducted eight experiments featuring over 3,900 participants from Canada, Norway, and the US. When the volunteers were presented with a range of scenarios involving gender discrimination in the recruitment process, there was significantly less moral outrage when the decisions were made by algorithms than when they were made by humans.
The volunteers also thought that companies would be less legally liable for any discrimination carried out by an algorithm, even when the company itself had developed and run that algorithm.
“It’s concerning that companies could use algorithms to shield themselves from blame and public scrutiny over discriminatory practices,” the researchers explain.
“People see humans who discriminate as motivated by prejudice, such as racism or sexism, but they see algorithms that discriminate as motivated by data, so they are less morally outraged,” they continue. “Moral outrage is an important societal mechanism to motivate people to address injustices. If people are less morally outraged about discrimination, then they might be less motivated to do something about it.”
Real-world implications
A number of the scenarios were drawn from real-world examples of gender discrimination delivered by algorithms, such as that imposed by Amazon’s recruitment algorithm. While the study focused on gender-based discrimination, the researchers believe that it would equally apply to other forms of bias and discrimination, such as age or race.
What's more, greater knowledge of technology didn't seem to change these perceptions. For instance, among a group of Norwegian tech workers, there was still less moral outrage at discrimination by AI systems than by humans.
This did change marginally when people learned more about how the algorithm was developed: if they were told the system had been built by male programmers, outrage rose to levels comparable to those provoked when men themselves had discriminated. As the origins of AI systems are largely hidden from us, however, this is scant consolation.
The researchers hope that their findings will remind programmers of the importance of understanding the potential for unintended discrimination. Equally, though, the findings could convince programmers that they have a free pass to discriminate, since the public won't judge them too harshly.