As AI continues to develop at a rapid pace, a growing number of conversations turn to how we can ensure it remains accountable. The broad consensus to date is that five core elements are required:
- Someone responsible for each instance of AI
- The ability to explain what is done, how it’s done and why it’s done
- Confidence in the accuracy of the system, and knowledge of where biases may exist
- The ability for third parties to probe and audit the algorithms (see the sketch after this list)
- AI that is developed with fairness in mind
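To make the third-party auditing point concrete, below is a minimal sketch of what basic black-box scrutiny might look like: it treats the system under audit as opaque, querying only its outputs and comparing favourable-outcome rates across groups (a demographic parity check). The `predict` function, the field names and the records are hypothetical placeholders for illustration, not anything taken from the report.

```python
# Minimal black-box fairness audit (illustrative only).
# Assumes nothing but query access to the system via a hypothetical
# `predict` function; the records and group labels are invented.

from collections import defaultdict

def predict(record):
    # Hypothetical stand-in for the opaque system under audit:
    # returns 1 for a favourable outcome, 0 otherwise.
    return 1 if record["score"] >= 0.5 else 0

def selection_rates(records, group_key):
    """Rate of favourable outcomes per group, computed purely from outputs."""
    favourable = defaultdict(int)
    total = defaultdict(int)
    for record in records:
        group = record[group_key]
        total[group] += 1
        favourable[group] += predict(record)
    return {group: favourable[group] / total[group] for group in total}

records = [
    {"group": "A", "score": 0.7},
    {"group": "A", "score": 0.4},
    {"group": "B", "score": 0.3},
    {"group": "B", "score": 0.2},
]

rates = selection_rates(records, "group")
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.5, 'B': 0.0}
print(f"demographic parity gap: {gap:.2f}")   # 0.50
```

Checks of this kind need no access to a model’s internals, which is precisely why even fairly basic statistical scrutiny of a system’s inputs and outputs can surface disparities worth public attention.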
This conversation has become more pressing as AI has taken on increasingly important decisions, from diagnosing disease to predicting recidivism. A new report from Omidyar Network examines whether automated systems currently receive enough public scrutiny, either from civil society or from official laws and regulations.
Opening the black box
“There is a growing desire to ‘open the black box’ of complex algorithms and hold the institutions using them accountable. But across the globe, civil society faces a range of challenges as they pursue these goals,” the authors explain.
This is often easier said than done, however. The report found that most automated decision-making tools today are a combination of human judgement, conventional software, and statistics, and that the non-technical aspects of these systems are often the most important.
Scrutiny need not be excessively sophisticated, either: many of the interesting case studies identified in the report were fairly basic, yet perhaps because of this they attracted productive public attention. The team do note that more sophisticated methods are beginning to bear fruit, although many of the more technical approaches are still at a theoretical stage of development and remain some way from practical deployment.
They suggest that whilst many existing laws and regulations are relevant to automated systems, the application of those laws and regulations has barely been tested.
“Some laws have been recently updated to specifically address automated decisions, but they remain largely untested. Others may require updating to remain effective in the era of widespread automation,” the authors say.
The path forward
They believe that things can be improved through greater investment in what they refer to as ‘exploratory scrutiny’. This could include work by journalists and advocacy organizations to engage a wider audience in the issues these systems raise.
“To engage a wider audience in debates about how automated systems should function, the field needs more work to find evidence about and clearly explain how important systems work in practice,” they explain.
This stage is crucial if we are to construct new policies and technical requirements. It will also underpin evaluation of whether existing laws around information and data effectively cover today’s automated systems. It is hard for the public to hold systems to account if they don’t know that those systems exist, what purpose they were designed to achieve, or what data they use to arrive at their decisions.
This is a crucial first step upon which more advanced techniques for scrutiny can be built, and it will be interesting to see whether it is ultimately taken. The issue underpins many of the ‘softer’ factors that research suggests will ultimately decide the pace of adoption of new technologies, and as such the report is a worthwhile addition to the debate.