As AI continues to develop at a rapid pace, a growing number of conversations turn to how we can ensure it remains accountable. The broad consensus to date is that five core things are required:
- Someone responsible for each instance of AI
- The ability to explain what is done, how it’s done and why it’s done
- Confidence in the accuracy of the system, and knowledge of where biases may exist
- The ability for third parties to probe and audit the algorithms
- AI that is developed with fairness in mind
It’s a topic discussed in a recent paper from Oxford University researchers. The authors argue for a holistic mindset to encourage the kind of new policies needed to manage technologies such as autonomous vehicles.
The paper provides three recommendations for policy makers looking to ensure our future relationship with AI and robotics is a safe and harmonious one.
- There is an assumption in the industry that making a system interpretable makes it less efficient. The authors believe this assumption deserves to be challenged, and indeed I’ve written previously about some interesting research that does just that and would allow interpretable systems to be deployed at scale.
- Whilst explainability is increasingly feasible, it remains elusive in certain scenarios, and the authors believe that alternative options need to be developed for such situations. It isn’t good enough to brush them off as too difficult.
- Regulation should be structured so that similar systems are regulated in a similar way. We should work to identify parallels between AI systems so that context-specific regulations can be established.
Accountable AI
One of the issues with making systems accountable is the computing power required to do so. There’s also a worry that by explaining the workings of a system, you give away its IP. A second paper, from researchers at Harvard University, explores many of these issues.
They aim to provide an explanation in the sense of setting out the reasons or justifications for the outcome arrived at by a system, rather than the nuts and bolts of how it works. In other words, the explanation is reduced to rules or heuristics, general principles if you like. Such a top-level account would also reduce the risk of industrial secrets being revealed.
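To make that distinction concrete, here is a minimal sketch of the general idea, not code from the paper: a hypothetical black-box lending model is approximated by a shallow surrogate decision tree, and only the surrogate’s rules are reported. The lending scenario, feature names and data below are all invented for illustration.

```python
# Illustrative sketch only: explain a black-box model as a handful of rules
# (a global surrogate), without exposing the model's internals.
# The lending scenario, feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stands in for the proprietary system
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # made-up income, debt and tenure scores
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # synthetic approve/decline labels

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate is trained on the black box's *outputs*, so its rules describe
# the reasons behind its decisions at a high level rather than how it works inside.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["income", "debt", "tenure"]))
```

The audience sees a justification of the form “declined because debt is high relative to income”, while the weights of the underlying system stay private.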
The team have boiled the matter down to a simple cost/benefit analysis that allows them to determine when it is worthwhile to reveal the workings of a system.
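As a rough illustration of that weighing, and emphatically not the paper’s own model, the decision might reduce to something like the following, with every quantity invented:

```python
# Hypothetical cost/benefit check: demand an explanation when the expected
# societal benefit outweighs the cost of producing it. All numbers are invented.
def explanation_warranted(societal_benefit: float,
                          compute_cost: float,
                          ip_exposure_risk: float) -> bool:
    return societal_benefit > compute_cost + ip_exposure_risk

# A consequential loan decision: large benefit to the affected person, modest
# cost to generate a rule-level explanation, small risk to industrial secrets.
print(explanation_warranted(societal_benefit=10.0, compute_cost=2.0, ip_exposure_risk=1.0))  # True
```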
“We find that there are three conditions that characterize situations in which society considers a decision-maker is obligated—morally, socially, or legally—to provide an explanation,” they say.
They believe that the decision in question must impact someone other than the person making the decision. Only then will value be derived from questioning the workings of the system.
They also look to ensure that a strong and robust legal framework underpins matters, as humans are prone to disagree on what is and is not morally justifiable, or even socially desirable; laws, by contrast, tend to be firmer and more codified. There are also certain situations in which such explanations are required by law, including areas such as strict liability or discrimination.
This will have a crucial bearing on the circumstances under which AI systems must explain themselves. It also allows the explanation of a decision to be made separately from the inner workings of the system itself, an important step on the journey towards explainable AI.
“We recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are,” they say.
Suffice it to say, they don’t believe this to be a final and definitive solution to the challenge, but it is nonetheless an interesting step along the way.