New Accenture Tool Aims To Remove Algorithmic Bias

The risk of AI systems hard-coding in the biases of their developers is one of the biggest challenges facing AI today.  The scale of the challenge was highlighted by recent work from MIT and Stanford University, which found that three commercially available facial-analysis programs exhibit considerable biases along both gender and skin-type lines.

Consulting firm Accenture have developed a new tool that they claim will identify algorithmic biases and so help companies make fair and ethical use of AI.  The tool checks the data that feeds an AI-based system to determine whether sensitive variables have an impact upon other variables.  For instance, gender is often correlated with profession, so even if a company removes gender from the data set, the model can still produce biased results if profession remains part of it.
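The proxy effect described above can be sketched in a few lines of Python.  The toy data, the choice of Pearson correlation, and the threshold below are purely illustrative assumptions for the sake of the example, not Accenture's actual method.

```python
# A minimal sketch of a proxy-variable check: even after dropping a
# sensitive column such as gender, a correlated column such as
# profession can leak the same information back into the model.
# Toy data and threshold are illustrative only.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# gender: 0/1 column the company intends to drop;
# profession: numerically encoded column that stays in the data set.
gender     = [0, 0, 0, 1, 1, 1, 0, 1]
profession = [1, 1, 2, 3, 3, 3, 1, 2]

r = pearson(gender, profession)
if abs(r) > 0.5:  # illustrative threshold for flagging a proxy
    print(f"profession is a likely proxy for gender (r = {r:.2f})")
```

With this toy data the correlation is strong, so dropping the gender column alone would not remove the bias risk — exactly the situation the tool is meant to flag.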

The tool then tests for algorithmic bias in terms of false positives and false negatives.  Based upon those results, it adjusts the model so that impact is equalized and people are treated fairly.  In other words, it aims not just to highlight a problem but to fix it.  It also calculates the trade-off in performance that comes with the increased fairness, presenting the results visually to aid decision making, even among non-technical audiences.
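The kind of check described above can be sketched as follows.  The applicant data, scores, groups and thresholds below are toy values I have assumed for illustration; the real tool's metrics and adjustment method are not public.

```python
# A minimal sketch: measure false-positive and false-negative rates per
# group, then adjust a per-group decision threshold to close the gap,
# noting the performance trade-off.  Toy values, not Accenture's method.

def error_rates(y_true, y_pred):
    """Return (false-positive rate, false-negative rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    return fp / neg, fn / pos

# Model scores, true labels, and group membership for eight applicants.
scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.45, 0.55, 0.2]
labels = [1,   1,   0,   0,   1,   1,    0,    0]
groups = ["a", "a", "a", "a", "b", "b",  "b",  "b"]

def rates_at(threshold, g):
    """Error rates for group g when scores >= threshold are accepted."""
    idx = [i for i, gi in enumerate(groups) if gi == g]
    preds = [1 if scores[i] >= threshold else 0 for i in idx]
    return error_rates([labels[i] for i in idx], preds)

# A single threshold of 0.5 treats the two groups very differently...
print("group a at 0.5:", rates_at(0.5, "a"))
print("group b at 0.5:", rates_at(0.5, "b"))
# ...while lowering group b's threshold equalizes the false-negative
# rate, at the cost of a higher false-positive rate for that group.
print("group b at 0.4:", rates_at(0.4, "b"))
```

The last line illustrates the trade-off the tool is said to surface: equalizing one error rate across groups can worsen another, which is why presenting the trade-off visually for decision makers matters.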

“AI is already making life-changing decisions, from medical diagnoses, parole judgements, and even matchmaking. But by now we know that these decisions aren’t always right. The results they generate can be skewed by biased data, leading to discrimination against racial background, age, gender, dialect, income, residence and more. There is a risk that AI, despite being deployed with the best intentions, could aggravate prejudices in both business and society, which so many have worked to tear down,” Accenture say.

“As the stakes get higher, everyone in the world of AI is acutely aware of the need to reach the highest standards of ethics possible. But there are other compelling pressures too: businesses know they need to be fastest to market. The AI Fairness Tool aims to reconcile this through an applied, interdisciplinary and innovation-friendly approach. Balancing both rapid innovation and agile ethics, it will help AI systems treat all people in a fair and unbiased way.”

The fairness of AI

It’s part of a wider body of work that’s aiming to make sure that AI is developed with fairness front and center.  Earlier this year the House of Lords Select Committee on Artificial Intelligence released a report exploring the ethical development of artificial intelligence in the future.  The Committee developed five principles around which they urge the development of AI to revolve:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

The House of Commons Science and Technology Committee followed with a second paper exploring the ethics of algorithmic decision making.

The report, whose launch coincided with the roll-out of GDPR, chimes with the earlier House of Lords report.  Both highlight the tremendous potential of AI, but also the need to ensure its development is pursued in an ethical way.

The authors call for a ‘Centre for Data Ethics and Innovation’ to be established by the government to ensure algorithms are transparent and free from bias, whilst allowing individuals to challenge decisions that affect them and seek redress where appropriate.

The report also proposes greater government oversight of instances whereby companies make use of public datasets in their algorithms, and indeed how those datasets could be monetized.

So it’s pleasing to see Accenture develop this kind of tool.  Suffice it to say, its value will hinge on its accuracy, and the company are currently testing it to ensure its reliability.  They have already prototyped it with a credit decision-making tool, however.

Should those tests prove successful, the tool is likely to be integrated into the Accenture Insights Platform, although that decision has yet to be finalized.