Can AI Self-Police And Reduce Bias?

Concerns that AI-based systems could hard-code the biases that blight human decision making have caused considerable consternation among researchers around the world. They have also prompted a number of attempts to overcome the challenge. For instance, I recently wrote about a new tool developed by Accenture that tries to identify biases within AI systems.

The tool checks the data that feeds any AI-based system to determine whether sensitive variables have an impact on other variables. For instance, gender is usually correlated with profession, so even if a company removes gender from the data set, it can still produce biased results if profession is part of the data set.
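To make that idea concrete, here is a minimal sketch of such a proxy check in Python. It is not Accenture's actual tool; it simply measures how strongly a removed sensitive attribute, such as gender, can still be read off a remaining feature, such as profession, using Cramér's V on a contingency table. The column names, data, and threshold are all illustrative.

    # A rough sketch of a proxy check, not Accenture's actual tool: it measures how
    # strongly a "removed" sensitive attribute can still be inferred from a remaining
    # feature, using Cramér's V on a contingency table. Column names, data, and the
    # 0.3 threshold are purely illustrative.
    import numpy as np
    import pandas as pd
    from scipy.stats import chi2_contingency

    def cramers_v(a: pd.Series, b: pd.Series) -> float:
        """Association strength between two categorical columns (0 = none, 1 = perfect)."""
        table = pd.crosstab(a, b)
        chi2, _, _, _ = chi2_contingency(table)
        n = table.to_numpy().sum()
        r, k = table.shape
        return float(np.sqrt((chi2 / n) / (min(r, k) - 1)))

    df = pd.DataFrame({
        "gender":     ["F", "F", "M", "M", "F", "M", "F", "M"],
        "profession": ["nurse", "nurse", "engineer", "engineer",
                       "nurse", "engineer", "nurse", "engineer"],
    })

    # Even if "gender" is dropped from the training data, a strongly associated
    # feature such as "profession" can smuggle the same information back in.
    if cramers_v(df["gender"], df["profession"]) > 0.3:
        print("'profession' looks like a proxy for 'gender'")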

The tool then tests for any algorithmic biases in terms of false positives and false negatives. Based on those results, it adjusts the model so that impact is equalized and people are treated fairly. In other words, it aims to do more than simply highlight a problem; it also aims to fix it for you. While doing so, it calculates the trade-offs in performance that come with the increased fairness, and presents them visually to aid decision making, even among non-technical audiences.
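As a rough illustration of that kind of check, rather than the tool itself, the snippet below compares false positive and false negative rates across two groups and then shows the accuracy cost of evening out the impact with group-specific cutoffs. The synthetic data, scores, and thresholds are assumptions made for the example.

    # A sketch of the kind of false positive / false negative check described above,
    # not the tool itself. It compares error rates between two groups, then shows the
    # accuracy that gets traded away when group-specific thresholds are used to even
    # out the impact. The synthetic data, scores, and thresholds are assumptions.
    import numpy as np

    def error_rates(y_true, y_pred):
        fpr = np.sum((y_pred == 1) & (y_true == 0)) / max(np.sum(y_true == 0), 1)
        fnr = np.sum((y_pred == 0) & (y_true == 1)) / max(np.sum(y_true == 1), 1)
        return fpr, fnr

    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, 1000)                  # two demographic groups, 0 and 1
    y_true = rng.integers(0, 2, 1000)                 # actual outcomes
    scores = 0.6 * y_true + 0.1 * group + rng.normal(0, 0.3, 1000)   # model favours group 1

    for g in (0, 1):
        mask = group == g
        fpr, fnr = error_rates(y_true[mask], (scores[mask] > 0.5).astype(int))
        print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")

    # One crude way to equalise impact: give each group its own cutoff, then measure
    # how much overall accuracy is sacrificed for the fairer decisions.
    thresholds = {0: 0.45, 1: 0.55}                   # illustrative, tuned in practice
    adjusted = np.array([scores[i] > thresholds[int(group[i])] for i in range(len(scores))])
    print("accuracy after adjustment:", np.mean(adjusted.astype(int) == y_true))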

Impartial machines

There is also a project led by the Santa Fe Institute, documented in a recently published paper, that proposes an algorithm for imposing fairness constraints to prevent a system from showing bias.

“So say the credit card approval rate of black and white [customers] cannot differ more than 20 percent. With this kind of constraint, our algorithm can take that and give the best prediction of satisfying the constraint,” the researchers say. “If you want the difference of 20 percent, tell that to our machine, and our machine can satisfy that constraint.”

The team believe their algorithm allows users to control the level of fairness required by law in various contexts. It's worth remembering, however, that fairness comes with a trade-off: trying to behave fairly can mean a drop in the predictive power of the algorithm.
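As a loose sketch of the constraint described in the quote above, and not the Santa Fe Institute's actual algorithm, the snippet below checks whether approval rates for two groups differ by more than a user-chosen limit of 20 percent and, if they do, relaxes the disadvantaged group's score cutoff until the constraint holds, reporting the accuracy given up in return. The data and thresholds are invented for illustration.

    # A loose sketch of the constraint in the quote, not the Santa Fe Institute's
    # actual algorithm: check whether approval rates for two groups differ by more
    # than a user-chosen 20 percent and, if they do, relax the disadvantaged group's
    # cutoff until the constraint holds, then report the accuracy the constraint
    # costs. All data and thresholds are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, 2000)
    y_true = rng.integers(0, 2, 2000)
    scores = 0.7 * y_true - 0.4 * group + rng.normal(0, 0.3, 2000)   # group 1 scored lower

    def approvals(th0, th1):
        return np.where(group == 0, scores > th0, scores > th1).astype(int)

    max_gap = 0.20                   # the user-specified fairness constraint
    th0, th1 = 0.5, 0.5
    decisions = approvals(th0, th1)
    while abs(decisions[group == 0].mean() - decisions[group == 1].mean()) > max_gap:
        th1 -= 0.01                  # lower the bar for the group approved less often
        decisions = approvals(th0, th1)

    gap = abs(decisions[group == 0].mean() - decisions[group == 1].mean())
    print(f"approval gap: {gap:.2f}")
    print("accuracy under the constraint:", np.mean(decisions == y_true))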

Nonetheless, the team hope that their work will be adopted by companies to help them identify potential discrimination lurking in their own machine learning applications.

“Our hope is that it’s something that can be used so that machines can be prevented from discrimination whenever necessary,” they say.
