How AI Can Remain Fair And Accountable

As AI continues to develop at a rapid pace, a growing number of conversations turn to how we can ensure it remains accountable. The broad consensus to date is that five core things are required:

  1. Someone responsible for each instance of AI
  2. The ability to explain what is done, how it’s done and why it’s done
  3. Confidence in the accuracy of the system, and knowledge of where biases may exist (a minimal check of this kind is sketched after this list)
  4. The ability for third parties to probe and audit the algorithms
  5. AI that is developed with fairness in mind
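
To make the third requirement above a little more concrete, here is a minimal, illustrative sketch of one such bias check: the demographic parity gap, i.e. the difference in positive-decision rates between two groups. It isn’t drawn from the paper itself, and the names (y_pred, group) are hypothetical placeholders rather than anything from a real system.

```python
# A minimal, illustrative sketch of a demographic parity check.
# y_pred and group are hypothetical placeholders, not real data.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    rate_a = y_pred[group == 0].mean()  # share of positive decisions, group 0
    rate_b = y_pred[group == 1].mean()  # share of positive decisions, group 1
    return abs(rate_a - rate_b)

# Toy example: binary decisions for ten applicants, five per group.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # ~0.4, a sizeable disparity
```

A real audit would look at many such measures, but even this toy check turns “is the system biased?” into a number that can be tracked and challenged.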

It’s a topic discussed in a recent paper from Oxford University researchers. The authors argue for a holistic mindset to encourage the kind of new policies needed to manage technologies such as autonomous vehicles.

The paper provides three recommendations for policy makers looking to ensure our future relationship with AI and robotics is a safe and harmonious one.

  1. There is an assumption in the industry that making a system interpretable makes it less efficient. The authors believe this assumption deserves to be challenged, and indeed I’ve written previously about some interesting research that does just that and would allow such systems to be deployed at scale.
  2. Whilst explainability is increasingly feasible, it remains elusive in certain scenarios, and the authors believe alternative options need to be developed for those situations (one standard way of probing even a black-box system is sketched after this list). It isn’t good enough to brush them off as too difficult.
  3. Regulation should be structured so that similar systems are regulated in a similar way. We should work to identify parallels between AI systems so that context-specific regulations can be established.
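
To illustrate what probing a black-box system can look like in practice, here is a minimal sketch of permutation feature importance, a standard model-agnostic technique rather than anything proposed in the paper itself: shuffle one input feature at a time and measure how much the model’s accuracy drops. The predict function, X and y here are hypothetical placeholders.

```python
# A minimal sketch of permutation feature importance, one standard
# model-agnostic way to probe a black-box classifier (illustrative only).
# predict, X and y are hypothetical placeholders.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature is shuffled in turn.

    A larger drop suggests the model leans more heavily on that feature.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)  # accuracy on the unshuffled data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and y
            drops.append(baseline - np.mean(predict(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances
```

Because it needs nothing more than the ability to query predictions, this is the sort of probe a third party could run without access to a system’s internals, which speaks to the fourth requirement listed earlier.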

“The most important thing is to recognise the similarities between algorithms, AI and robotics. Transparency, privacy, fairness and accountability are essential for all algorithmic technologies. We need to address these challenges together to design safe systems,” the authors say.

In all the hubbub around AI and its tremendous capabilities, its ability to explain what it’s doing is crucial. We’ve seen plenty of calls for better data governance, but far less about the accountability AI will require if we’re to have confidence in it.

“AI can be an immense force for good, but we need to ensure that its risks are prevented or minimised. To do this, it is not enough to react to problems. A permanent crisis approach will not be successful. We need to develop some robust ethical foresight analysis not only to see ‘which grain will grow and which will not’ but above all to decide which grains we should sow in the first place,” the authors continue.
