Characterizing Human Expertise Using Computational Metrics of Feature Diagnosticity in a Pattern Matching Task

Bibliographic Details
Published in: Cognitive Science, 2017-09, Vol. 41 (7), pp. 1716-1759
Main Authors: Busey, Thomas; Nikolov, Dimitar; Yu, Chen; Emerick, Brandi; Vanderkolk, John
Format: Article
Language:English
Description
Summary: Forensic evidence often involves an evaluation of whether two impressions were made by the same source, such as whether a fingerprint from a crime scene has detail in agreement with an impression taken from a suspect. Human experts currently outperform computer‐based comparison systems, but the strength of the evidence exemplified by the observed detail in agreement must be evaluated against the possibility that some other individual may have created the crime scene impression. Therefore, the strongest evidence comes from features in agreement that are also not shared with other impressions from other individuals. We characterize the nature of human expertise by applying two extant metrics to the images used in a fingerprint recognition task and use eye gaze data from experts to both tune and validate the models. The Attention via Information Maximization (AIM) model (Bruce & Tsotsos, 2009) quantifies the rarity of regions in the fingerprints to determine diagnosticity for purposes of excluding alternative sources. The CoVar model (Karklin & Lewicki, 2009) captures relationships between low‐level features, mimicking properties of the early visual system. Both models produced classification and generalization performance in the 75%–80% range when classifying where experts tend to look. A validation study using regions identified by the AIM model as diagnostic demonstrates that human experts perform better when given regions of high diagnosticity. The computational nature of the metrics may help guard against wrongful convictions, as well as provide a quantitative measure of the strength of evidence in casework.
ISSN: 0364-0213; 1551-6709
DOI: 10.1111/cogs.12452
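
As an illustration of the rarity idea described in the summary above, the following is a minimal sketch (not the authors' implementation) of self-information-based saliency in the spirit of AIM (Bruce & Tsotsos, 2009): image regions whose local feature responses are improbable across the image carry more self-information and are, in that sense, more diagnostic. The patch size, the PCA basis used as a stand-in for AIM's learned ICA filters, the histogram density estimates, and all parameter values are illustrative assumptions.

```python
import numpy as np

def rarity_map(image, patch=11, n_components=16, bins=64):
    """Return a per-patch self-information (rarity) map for a 2-D grayscale image."""
    h, w = image.shape
    ph, pw = h - patch + 1, w - patch + 1
    # Gather all overlapping patches as row vectors.
    patches = np.array([
        image[i:i + patch, j:j + patch].ravel()
        for i in range(ph) for j in range(pw)
    ], dtype=float)
    patches -= patches.mean(axis=0)
    # PCA basis as a stand-in for AIM's learned ICA filters (an assumption).
    _, _, vt = np.linalg.svd(patches, full_matrices=False)
    coeffs = patches @ vt[:n_components].T
    # Estimate each coefficient's marginal density with a histogram, then score
    # every patch by its summed self-information: -sum(log p(coefficient)).
    info = np.zeros(len(patches))
    for k in range(n_components):
        hist, edges = np.histogram(coeffs[:, k], bins=bins, density=True)
        idx = np.clip(np.digitize(coeffs[:, k], edges[1:-1]), 0, bins - 1)
        p = np.maximum(hist[idx] * np.diff(edges)[idx], 1e-12)
        info += -np.log(p)
    return info.reshape(ph, pw)

# Usage: high values mark locally rare (potentially diagnostic) regions.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[20:32, 20:32] += 2.0          # an unusual region should score high
rm = rarity_map(img)
print(np.unravel_index(rm.argmax(), rm.shape))
```

Scoring patches by -log p of their feature responses is the standard information-theoretic reading of "rarity"; the paper's models additionally use expert eye-gaze data to tune and validate which rare regions experts actually exploit, which this sketch does not attempt.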