Algorithms were once hailed as a way to streamline processes and eliminate bias in decision-making, from hiring to judicial rulings and healthcare distribution. As we have since learned, however, algorithms can harbor biases much like their human counterparts.
But what if this revelation isn’t necessarily negative?
Recent findings from Boston University indicate that people are more attuned to bias in algorithmic decisions than in their own, even when the decisions are identical. The research suggests this heightened awareness could help human decision-makers spot and correct biases in their own judgments.
Social biases
In a series of experiments, the researchers crafted scenarios to uncover participants’ social biases, including prejudices related to race, gender, and age.
The team compared participants’ awareness of bias in their own decisions with their awareness of bias in decisions supposedly made by algorithms. Some scenarios presented decisions attributed to actual algorithms, while in others, participants’ own choices were presented back to them disguised as algorithmic decisions.
Consistently, participants were more likely to identify bias in decisions they believed came from algorithms than in their own choices. Notably, they perceived roughly as much bias in algorithmic decisions as in decisions made by other people, reflecting a phenomenon known as the bias blind spot: individuals recognize bias more readily in others than in themselves.
Participants were also more willing to correct biases after the fact when the decisions were attributed to algorithms, underscoring post-decision correction as a way to mitigate bias in future judgments.
“Right now, we think the literature on algorithmic bias is bleak,” the researchers conclude. “A lot of it says that we need to develop statistical methods to reduce prejudice in algorithms. But part of the problem is that prejudice comes from people. We should work to make algorithms better, but we should also work to make ourselves less biased.”
“What’s exciting about this work is that it shows that algorithms can codify or amplify human bias, but algorithms can also be tools to help people better see their own biases and correct them,” they say. “Algorithms are a double-edged sword. They can be a tool that amplifies our worst tendencies. And algorithms can be a tool that can help better ourselves.”