ABSTRACT

Fingerprint evidence has been used to help identify criminals for more than a century. Sir Francis Galton is credited with the first extensive analysis of fingerprint variability. However, since statistics and probability theory came into widespread use in the 1980s and 1990s to support and report inferences on the source of biological material recovered at crime scenes, scientific and legal scholars have called for the application of similar methods to handle the uncertainty in determining the source of other types of evidence.

This chapter investigates the two main approaches currently advocated for supporting the conclusions of fingerprint examinations: Bayesian inference and error rates. The findings of this chapter show that most ad hoc methods offered to support Bayesian inference in fingerprint examination may have some merit as deterministic decision tools; however, their use within a Bayesian paradigm is not appropriate: so-called score-based likelihood ratios cannot be used to update prior beliefs on the source of a finger impression as part of Bayesian reasoning. The error-rate ("black-box") studies performed during the past decade inform the community on the magnitude of the expected rates of erroneous identifications and exclusions. Unfortunately, it is very difficult to relate these community-wide expected error rates to the risk of error in a specific case, because that risk depends on the quality of the impression, the appropriateness of the examination and documentation procedures established by the particular laboratory, and the competency of the examiner performing the examination. In conclusion, we find that the foundations of fingerprint examination are much stronger now than a decade ago, but much remains to be done to provide tools for data-driven decision-making in individual cases.
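As a point of reference for the argument above, the Bayesian updating at issue is conventionally written in the odds form of Bayes' theorem. The sketch below uses illustrative notation that is not drawn from the chapter itself: propositions H_s (the questioned impression and the reference print share a source) and H_d (they originate from different sources), and evidence E. The chapter's conclusion is that a score-based likelihood ratio cannot legitimately play the role of the middle term in this identity.

% Odds form of Bayes' theorem for a source attribution problem.
% Notation (illustrative, not from the chapter):
%   H_s : the impression and the reference print share a source
%   H_d : they originate from different sources
%   E   : the observed fingerprint evidence
% (Assumes the amsmath package for \text.)
\[
  \underbrace{\frac{\Pr(H_s \mid E)}{\Pr(H_d \mid E)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{\Pr(E \mid H_s)}{\Pr(E \mid H_d)}}_{\text{likelihood ratio}}
  \times
  \underbrace{\frac{\Pr(H_s)}{\Pr(H_d)}}_{\text{prior odds}}
\]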