Fingerprints have been used as evidence in criminal trials for over 100 years, with the first case in the United States taking place in 1911. Whilst for much of that century the fingerprint has been regarded as all but infallible evidence, cracks have begun to appear in recent times.
A team from the National Institute of Standards and Technology (NIST) and Michigan State University has developed an algorithm to automatically assess the quality of crime-scene fingerprints, in an attempt to remove the potential for human error.
“We know that when humans analyze a crime scene fingerprint, the process is inherently subjective,” the authors say in a recently published paper. “By reducing the human subjectivity, we can make fingerprint analysis more reliable and more efficient.”
Variable quality
Fingerprint analysis is made difficult by the variable quality of the prints found at crime scenes. They are usually of significantly poorer quality than those produced at police stations, and are often partial, distorted or smudged.
Often, therefore, the first step is to figure out which prints can and cannot be used. It is this step that the team hope to automate with their algorithm: the system submits any approved prints to an Automated Fingerprint Identification System (AFIS), which then searches the database for potential matches for investigators to examine.
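To make that triage step concrete, here is a minimal sketch in Python. The names (`LatentPrint`, `score_quality`, `triage`), the 0–100 quality scale and the threshold are illustrative assumptions, not the researchers' actual system; in practice the quality model would be the trained algorithm described below.

```python
from dataclasses import dataclass

@dataclass
class LatentPrint:
    case_id: str
    features: list[float]  # e.g. minutiae-derived descriptors (assumed)

def score_quality(print_: LatentPrint) -> float:
    """Stand-in for the learned quality model: returns a score in [0, 100].

    A real model would be trained on expert-assigned scores (see below).
    """
    return min(100.0, sum(abs(f) for f in print_.features))

def triage(prints: list[LatentPrint], threshold: float = 50.0) -> list[LatentPrint]:
    """Keep only prints whose predicted quality clears the threshold,
    so that only usable prints go on to the AFIS database search."""
    return [p for p in prints if score_quality(p) >= threshold]

# Example: only the second print clears the assumed threshold.
prints = [LatentPrint("case-1", [3.0, 5.0]), LatentPrint("case-2", [40.0, 25.0])]
print([p.case_id for p in triage(prints)])  # ['case-2']
```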
It makes what was previously a highly subjective process consistent and repeatable, whilst also making things considerably more efficient. With many forces facing a considerable backlog of forensic casework, this should make the police more effective at tackling crime.
Machine learning was used to train the algorithm: 31 human experts analyzed hundreds of prints, assigning each a quality score that served as the training labels for the system.
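A hedged sketch of that training setup follows: a supervised regressor fit to expert-assigned quality scores. The simulated data, the feature dimensions and the choice of a random forest are assumptions for illustration; the paper's actual model and features may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: one feature vector per print, plus a quality label
# averaged over (here simulated) scores from 31 expert annotators.
X = rng.normal(size=(300, 16))                        # 300 prints, 16 image features each
expert_scores = rng.uniform(0, 100, size=(300, 31))   # 31 experts per print
y = expert_scores.mean(axis=1)                        # consensus quality label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```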
The system was then put to the test on a new series of fingerprints, which were searched against a database of over 250,000 prints known to contain a match for each test print.
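An illustrative sketch of that kind of evaluation: each test print is searched against a large gallery containing its true mate, and we measure how often the mate appears among the top-ranked candidates. The cosine-similarity matcher and the rank-10 hit rate are stand-ins, not the actual AFIS matcher or the study's metric.

```python
import numpy as np

rng = np.random.default_rng(1)
gallery = rng.normal(size=(250_000, 16))   # enrolled prints (as feature vectors)
query_idx = rng.choice(250_000, size=100, replace=False)
queries = gallery[query_idx] + rng.normal(scale=0.1, size=(100, 16))  # noisy mates

def rank_of_mate(query: np.ndarray, true_idx: int) -> int:
    """Rank (1 = best) of the true mate when the gallery is sorted by similarity."""
    sims = gallery @ query / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(query))
    order = np.argsort(-sims)              # best match first
    return int(np.where(order == true_idx)[0][0]) + 1

hits = sum(rank_of_mate(q, i) <= 10 for q, i in zip(queries, query_idx))
print(f"Rank-10 hit rate: {hits / len(queries):.0%}")
```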
Despite the relatively small dataset used to train the algorithm, it still managed to outperform the human experts in the study, albeit by a small margin. The researchers hope to feed the system even more data to improve its performance further, but they need the cooperation of police forces to do so. For this particular study, the team worked with the Michigan State Police, who provided the data after first removing any identifying information from the fingerprints, thus averting privacy concerns.
“We’ve run our algorithm against a database of 250,000 prints, but we need to run it against millions,” the authors conclude. “An algorithm like this has to be extremely reliable, because lives and liberty are at stake.”
It’s an interesting technology, and it will be fascinating to see the progress the project makes.