The Drive Towards An Ethical Approach To AI

As AI has slowly moved from pilots and research projects into production, there is a growing level of concern that the technology be used in an ethical way.  To date, most of that concern has come from think tanks, government agencies and bodies like OpenAI.  A recent study from Accenture suggests it's a message that is getting through to executives, however.

The paper reveals that some 70% of organizations that have adopted AI in some way provide ethics training for their technology teams, whilst 63% of them have an ethics committee in place to oversee their work.  This trend was found to be most common in the UK, where 80% of companies provided such training.

It's also noticeable that those who are furthest ahead with AI deployment are also those doing the most to ensure those deployments are ethical.  The findings emerged from an analysis of 305 businesses conducted by Accenture in partnership with SAS and Intel.

“Organisations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” Accenture say. “These are positive steps; however, organisations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’”.

The importance of ethical oversight

It's increasingly the case that leaders involved in deploying AI see ethical deployment as non-negotiable: projects simply cannot proceed without it.  Indeed, around 74% of leaders say they hold weekly reviews to ensure things are proceeding in the right way.

Despite this apparent progress, however, the authors also sound a note of caution, as the technology is still advancing faster than the oversight processes that govern it.

“The ability to understand how AI makes decisions builds trust and enables effective human oversight,” the authors say. “For developers and customers deploying AI, algorithm transparency and accountability, as well as having AI systems signal that they are not human, will go a long way toward developing the trust needed for widespread adoption.”

Getting started with ethics

As ethical deployment of AI becomes more important, it's perhaps not surprising that more and more advice is appearing on how best to achieve it.  Perhaps the most useful comes via a recent report from the Brookings Institution, which outlined six steps you can take to get started with ethical governance of AI.

  1. Hire official company ethicists – This should not be an addendum to someone’s official role, but their dedicated responsibility.  They should have a seat at the table for all AI-based discussions to ensure that ethics is taken seriously.  They can also help to ensure that ethics forms part of the culture that emerges around AI technologies.
  2. Have a code of ethics – The next step is to formalize a code of ethics that makes clear the principles, processes and ethical guidelines for AI development in the organization.
  3. Build an AI review board – The board should be tasked with evaluating all aspects of AI development and deployment, and should be integrated into the decision-making framework of the company.
  4. Mandate AI audit trails – The explainability of AI is crucial if people are to trust the decisions and predictions AI systems produce.  Audit trails help to provide a level of transparency that is likely to be required by external parties, especially if any legal proceedings result from your AI deployments.
  5. Implement AI training programs – These training programs have to include ethical, legal and societal factors as well as the obvious technical training.  This helps ensure that software developers build systems that operate in accordance with the ethical framework outlined above.
  6. Establish a means of remediation for harm caused by AI – Hopefully with all of the above in place, your AI deployments won’t cause any harm, but should harm occur, it’s important that you have processes in place to compensate those who have been harmed, and for the situation to be rectified.
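To make step 4 concrete, here is a minimal sketch of what an AI audit trail might record.  The `log_prediction` function and the credit-scoring scenario are hypothetical illustrations, not part of the Brookings report; the point is simply that each decision is stored with the exact inputs, output, and model version needed to explain it later.

```python
import json
import datetime

def log_prediction(audit_log, model_version, features, prediction):
    """Append one audit record so a decision can later be explained."""
    record = {
        # When the decision was made (UTC, ISO 8601).
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # Which model produced it, so behaviour can be traced across versions.
        "model_version": model_version,
        # The exact inputs the model saw.
        "inputs": features,
        # The decision or score it produced.
        "output": prediction,
    }
    audit_log.append(record)
    return record

# Example: record a single (hypothetical) credit-scoring decision.
audit_log = []
record = log_prediction(
    audit_log,
    model_version="credit-model-v1.2",
    features={"income": 42000, "loan_amount": 10000},
    prediction={"approved": True, "score": 0.87},
)
print(json.dumps(record, indent=2))
```

In practice such records would go to append-only storage rather than an in-memory list, but even this simple structure captures what an external reviewer would need: what went in, what came out, and which model was responsible.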

The Accenture data suggests that those who are furthest along in the development of AI are also furthest along in the development of this kind of ethical framework.  The emphasis now has to be on the laggards to follow suit.
