There are dozens of cognitive biases that influence our thoughts and behaviors, and for a time it was believed that automated systems were the only way to reach genuinely clear and balanced decisions. Others argue, of course, that such systems are far from bias-free and often hard-code the biases of their developers.
A recent paper explored the gap between how we intend to use algorithms and how we actually use them. The study examined their application in both digital journalism and criminal justice, and it emerged that workers in the two fields use remarkably similar strategies to deal with algorithms.
“Whereas managers and executives frequently emphasize how ‘data-driven,’ modern, and rational their organization is, the actual uses of algorithmic techniques in web newsrooms and criminal courts reveal a more complicated picture,” the authors say.
Bypassing the digital support
For instance, workers in both environments were given digital tools to support their work, whether that meant real-time analytics platforms for journalists or predictive risk-assessment tools in courtrooms. In both cases, however, workers would often opt out of using the tools that were supposed to help them.
In newsrooms, many journalists deliberately avoided the analytics platform, with many revealing that writing for pageviews often resulted in poorer-quality stories.
A similar picture emerged in courtrooms, where algorithms were introduced to try to overcome apparent racial biases in the system, yet they remain seldom used by judges because of the bias the tools themselves are perceived to carry.
“I’d prefer to look at actual behaviors,” one judge told the researchers. “With these tools the output is only as good as the input. And the input is controversial.”
Central to the concerns of both sets of workers is the reliability of the algorithms they're given. Many of the current generation are opaque, so workers have little idea of how they actually work.
Greater transparency
There are attempts to make algorithms better at explaining their workings. For instance, earlier this year researchers developed an algorithm that not only performs its task but also translates how it did so into reasonably understandable English, documenting each stage of its work.
Suffice to say, its capability to do this remains rather limited so far: it can only describe its work in recognizing human behavior in pictures.
The algorithm trains itself on two distinct data sets. The first helps it figure out what's going on in a photo, and the second helps it explain how it did so. The first half of this task is fairly standard, with a series of labelled images fed to the algorithm. The second half, however, is quite novel: each image is paired with three questions, each with 10 possible answers. So it might ask, “Is the person cycling? No, because… the woman doesn’t have a bicycle.”
This gives it a degree of context around how it came to identify what was in the picture. The research team refer to it as a ‘pointing and justification’ system, in the sense of being able to justify any data that you care to point at.
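To make that two-signal setup a little more concrete, the sketch below shows one way such a model could be structured: a shared image encoder feeding a classification head (the “pointing”) alongside a small text decoder that produces a justification sentence. This is a minimal illustration under assumed shapes, module names, and dataset sizes, not the researchers’ actual implementation; PyTorch is used purely for convenience.

```python
# Minimal sketch (illustrative, not the paper's code) of a "pointing and
# justification" style model: one head predicts the activity label,
# a second head generates a short textual justification.
import torch
import torch.nn as nn

class PointAndJustifyModel(nn.Module):
    def __init__(self, num_activities, vocab_size, feat_dim=512, hidden_dim=256):
        super().__init__()
        # Shared image encoder (stand-in for a pretrained CNN backbone).
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, feat_dim),
            nn.ReLU(),
        )
        # Head 1: classify what is happening in the image ("pointing").
        self.classifier = nn.Linear(feat_dim, num_activities)
        # Head 2: decode a short justification sentence ("justification").
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.GRU(hidden_dim + feat_dim, hidden_dim, batch_first=True)
        self.word_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, justification_tokens):
        feats = self.encoder(images)                  # (B, feat_dim)
        activity_logits = self.classifier(feats)      # (B, num_activities)

        # Condition every decoding step on the same image features,
        # so the explanation is tied to the evidence behind the prediction.
        emb = self.embed(justification_tokens)        # (B, T, hidden_dim)
        feats_rep = feats.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([emb, feats_rep], dim=-1))
        word_logits = self.word_out(dec_out)          # (B, T, vocab_size)
        return activity_logits, word_logits

# Joint loss over the two training signals described above:
# labelled images for the task, question/answer text for the explanation.
model = PointAndJustifyModel(num_activities=40, vocab_size=5000)
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 40, (8,))
tokens = torch.randint(0, 5000, (8, 12))              # justification text as token ids

act_logits, word_logits = model(images, tokens[:, :-1])
loss = (
    nn.functional.cross_entropy(act_logits, labels)
    + nn.functional.cross_entropy(word_logits.reshape(-1, 5000), tokens[:, 1:].reshape(-1))
)
loss.backward()
```

The design point worth noting is that both heads share the same image features, so the justification has to be grounded in whatever evidence drove the prediction rather than being generated independently of it.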
These are fairly modest beginnings, but they hint at a future with greater transparency, which may in turn help to build the kind of trust in digital systems that seems to be lacking today.
“People who design these tools do not always follow closely how they are being used in specific organizations,” the authors conclude. “Sometimes people use technology in ways you want them, but sometimes they use it differently.”