Recently the World Economic Forum pondered whether organizations should hire an AI Ethics Officer to ensure that the algorithms they develop make fair and ethical decisions.
“AI solutions could, for example, unintentionally generate discriminatory outcomes because the underlying data is skewed towards a particular population segment,” WEF writes. “This could deepen existing structural injustices, skew power balances further, threaten human rights and limit access to resources and information.”
While those concerns are undoubtedly important, such discussions tend to overlook the fact that human decision-making is often afflicted by the very same issues, yet we seldom hear calls for a “Chief Ethics Officer” to oversee our own decision-making processes.
The importance of noise
Indeed, the very fact that we are so frequently discussing the ethical development and deployment of artificial intelligence is a clear and distinct advantage that the technology holds over human decision-making.
In Noise, Daniel Kahneman, Cass Sunstein, and Olivier Sibony highlight the many ways in which human decision-making is inherently “noisy”. Noise can loosely be defined as the unwanted variability that causes two judgments that should be identical to differ. The authors cite numerous examples that have received considerable publicity in recent years: judges who hand down different sentences depending on whether they have a full stomach, or managers who give different performance appraisals depending on the day of the week.
The authors highlight how noise can be caused by a number of factors. One of the most common is what they refer to as “objective ignorance”, which underlines the limits of our knowledge. In other words, some things are simply not possible to know. As the illusory superiority bias illustrates, however, we’re often pretty bad at grasping just how limited our knowledge is. Kahneman et al. suggest that we generate an internal signal that rewards us for spinning a narrative that makes sense of the unknown. This helps explain why algorithms are often better judges than we are, a fact that frequently surprises experts.
“Undoubtedly, we need to draw attention to the costs of noiseless but biased algorithms, just as we need to consider the costs of noiseless but biased rules,” Kahneman et al. write. “The key question is whether we can design algorithms that do better than real-world human judges on a combination of criteria that matter: accuracy and noise reduction, and non-discrimination and fairness.”
On that question, I fear we are holding algorithms to a much higher standard than we hold our own human decision-making, which is likely to mean we lose out on many of the gains AI could deliver today.