New Report Urges Greater Algorithmic Accountability

The accountability of algorithms is something I’ve touched on a number of times as a growing number of reports have emerged examining the topic. For instance, at the start of the year, a report from Omidyar Network examined whether automated systems currently receive enough public scrutiny, whether from civil society or from official laws and regulations.

“There is a growing desire to ‘open the black box’ of complex algorithms and hold the institutions using them accountable. But across the globe, civil society faces a range of challenges as they pursue these goals,” the authors explain.

This was followed by a second paper from the AI Now Institute, which attempted to create an Algorithmic Impact Assessment (AIA) framework to support communities and stakeholders affected by AI decision systems.

“While AIAs will not be a panacea for the problems raised by automated decision systems, they are designed to be practical tools to inform the policy debate about the use of such systems and to provide communities with information that can help determine whether those systems are appropriate,” the authors conclude.

Algorithmic accountability

The best things clearly come in threes, as a third report has recently been published by the Center for Data Innovation. It advocates greater algorithmic accountability to ensure that society guards against harm and that the laws governing human decisions also apply to AI-based decisions.

“Algorithms create a wide variety of social and economic benefits, and as they improve they will help solve newer and bigger challenges,” the authors say. “Policymakers should be careful not to jump the gun on regulation. Many of the proposals that have been put forward to regulate algorithms would do little to protect consumers but would be sure to stall innovation.”

Algorithmic accountability would ensure not only that humans can verify that these systems work as intended, but also that any harmful outcomes can be identified and rectified. The authors advocate that policymakers adopt a harms-based approach to protecting individuals, one that would hold operators to account through a relatively light-touch set of regulations.

Regulators would apply a sliding scale of enforcement actions to any company shown to have caused harm through its algorithms, with punishments reserved for intentional and harmful actions and little to no penalty for unintentional or harmless ones.

“Many people have proposed drastic new regulations for algorithms, and Europe has already started moving down this path,” they say. “The risk today is that policymakers may overregulate algorithms, and in the process, limit innovation with technologies like artificial intelligence. A better approach is for policymakers to continue to use light-touch regulation to ensure proper oversight while enabling development and adoption of new technology.”

We’ve seen and heard a lot about the talent arms race in the tech industry as companies compete for AI expertise, but the report also urges regulators and policymakers to deepen their own technical understanding so that they can keep pace with changes in the marketplace.

Time will tell whether this strategy is adopted by the various relevant parties around the world, but it is at least pleasing to see this issue being taken more seriously.
