Human decision-making is far from perfect, with biases causing considerable variation, discrimination, and unfairness. Artificial intelligence is increasingly being deployed to try to improve matters, but is it working?
A new report from Rotman investigates and finds that while AI does indeed have the potential to overcome some of our most damaging biases, it can also reinforce gender and racial inequality.
“AI has a ton of power to create outcomes that are very helpful to people,” the researchers say. “But, we can’t think of technology as separate from the issues going on in society.”
Bias in, bias out
The authors remind us that the responses we get from these AI-based systems are usually only as robust and reliable as the data the algorithms are trained on. For instance, they highlight a medical school whose admissions algorithm rejected female candidates and those with non-European-sounding names because it was trained on data in which these groups were underrepresented.
They also point to a system designed to spot cancerous skin lesions that was far less likely to detect such cancers in dark-skinned patients because the data the algorithm was trained on mostly featured light-skinned individuals.
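This "bias in, bias out" dynamic can be sketched in a few lines. The example below uses entirely hypothetical data: a naive screening model fit to skewed historical decisions simply learns each group's past acceptance rate and reproduces the skew rather than correcting it.

```python
from collections import Counter

# Hypothetical historical decisions (group label, accepted?) that
# embed a human bias: group B was accepted far less often.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 20 + [("B", False)] * 80
)

def train(history):
    """Learn the historical acceptance rate per group."""
    accepted = Counter(g for g, ok in history if ok)
    total = Counter(g for g, _ in history)
    return {g: accepted[g] / total[g] for g in total}

def predict(group, rates, threshold=0.5):
    """Naive model: accept a candidate if their group's
    historical acceptance rate clears the threshold."""
    return rates[group] >= threshold

rates = train(history)   # {"A": 0.8, "B": 0.2}

# The model reproduces the bias in its training data:
# group A candidates pass, group B candidates are rejected.
assert predict("A", rates)
assert not predict("B", rates)
```

The model is deliberately simplistic, but the same failure mode applies to more sophisticated learners: if the training labels encode discriminatory decisions, an algorithm that optimizes for fidelity to those labels will replicate the discrimination.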
Discrimination and bias can also emerge through how such systems are deployed. The report cites examples of law enforcement agencies using facial-analysis software to predict criminality from facial features.
Considering social equity
These problems could be overcome by ensuring that social equity considerations are factored into each AI project from the very start. This can be especially effective when diverse groups are included in the project teams themselves.
There is also a need for governments to move more quickly on policy and regulations that will help to ensure that the appropriate standards and levels of accountability are in place for these technologies. As AI becomes a more prominent part of our lives, there will be inevitable calls for it to be fairer than is currently the case.
“There are always ways in which governments and organizations creating and using AI can take a pause and say, ‘Hey, we shouldn’t do this,’” the researchers conclude. “It’s not too late. You can always make changes.”