Is AI Okay At Predicting Recidivism After All?

The use of AI in the criminal justice system has been beset by concerns that biases in the software encourage discriminatory outcomes. New research from Stanford and the University of California suggests that criticism may be unfair.

The study reveals that in straightforward circumstances with just a few variables, even untrained humans can perform as well as the technology, but real life is seldom like that and is often far more complex. In these more complex examples, the software was able to achieve around 90% accuracy versus just 60% for the humans.

“Risk assessment has long been a part of decision-making in the criminal justice system,” the researchers say. “Although recent debate has raised important questions about algorithm-based tools, our research shows that in contexts resembling real criminal justice settings, risk assessments are often more accurate than human judgment in predicting recidivism. That’s consistent with a long line of research comparing humans to statistical tools.”

Incarceration rates

The United States has the highest incarceration rate in the world, with African Americans disproportionately affected. The researchers believe that advanced risk assessment tools could play a key role in improving judicial decision-making, especially in determining which individuals can be rehabilitated in the community rather than in prison.

While such tools are widely used in the US, doubt was raised about them after a study from Dartmouth University highlighted the relatively poor performance of such systems, with accuracy rates of around 66%. What's more, the results were no better than those achieved by the humans in the study.

The results cast a wave of doubt over the use of technology in such important decisions, prompting many to argue that they should be left to humans alone.

Where the Californian team believes this study went wrong, however, was in using a relatively small number of variables to describe each case. This not only produces poorer results, but also fails to accurately reflect the reality of criminal justice decisions.

“Pre-sentence investigation reports, attorney and victim impact statements, and an individual’s demeanor all add complex, inconsistent, risk-irrelevant, and potentially biasing information,” the authors explain.

Complex world

Not only is the real world complex, it is also full of noise that can distort decision-making. The researchers hypothesized that risk assessment systems would do better in such an environment, and built upon the five risk factors used in the Dartmouth study with an additional ten, including mental health, employment status, and substance abuse.

They also used a richer methodology in which, in some instances, volunteers were not given feedback on the accuracy of their predictions, as this more closely reflects the conditions judges actually face.

Under these conditions, humans performed consistently worse than the risk assessment tool, especially on complex cases where immediate feedback wasn't available to guide future decisions. The tool made accurate predictions in 89% of the examples, versus just 60% for the humans.

The team believes the findings highlight that risk assessment technology can still have value in the judicial process, especially as a decision-support tool for the judges who make the ultimate decision.
