FacialSCDnet: A deep learning approach for the estimation of subject-to-camera distance in facial photographs

Bibliographic Details
Published in: Expert Systems with Applications, 2022-12, Vol. 210, p. 118457, Article 118457
Main Authors: Bermejo, Enrique, Fernandez-Blanco, Enrique, Valsecchi, Andrea, Mesejo, Pablo, Ibáñez, Oscar, Imaizumi, Kazuhiko
Format: Article
Language: English
Description
Summary: Facial biometrics play an essential role in the fields of law enforcement and forensic sciences. When comparing facial traits for human identification in photographs or videos, the analysis must account for several factors that impair the application of common identification techniques, such as illumination, pose, or expression. In particular, facial attributes can drastically change depending on the distance between the subject and the camera at the time of the picture. This effect is known as perspective distortion, which can severely affect the outcome of the comparative analysis. Hence, knowing the subject-to-camera distance of the original scene where the photograph was taken can help determine the degree of distortion, improve the accuracy of computer-aided recognition tools, and increase the reliability of human identification and further analyses. In this paper, we propose a deep learning approach to estimate the subject-to-camera distance of facial photographs: FacialSCDnet. Furthermore, we introduce a novel evaluation metric designed to guide the learning process, based on changes in facial distortion at different distances. To validate our proposal, we collected a novel dataset of facial photographs taken at several distances using both synthetic and real data. Our approach is fully automatic and can provide a numerical distance estimation for up to six meters, beyond which changes in facial distortion are not significant. The proposed method achieves an accurate estimation, with an average error below 6 cm of subject-to-camera distance for facial photographs in any frontal or lateral head pose, robust to facial hair, glasses, and partial occlusion.
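The perspective distortion the abstract describes, and its fading beyond a few meters, can be illustrated with a simple pinhole-camera sketch. This is not the paper's method; the focal length, feature widths, and the assumption that the nose plane sits roughly 10 cm in front of the ear plane are all illustrative round numbers.

```python
# Illustrative pinhole-camera sketch of perspective distortion in a face.
# Features closer to the camera (e.g., the nose) project larger than
# features farther back (e.g., the ears); the effect shrinks as the
# subject-to-camera distance grows.

def projected_width(real_width_m, depth_m, focal_px=1000.0):
    """Projected width in pixels of a feature of a given real width
    located at a given depth, under the pinhole model: w_px = f * W / Z."""
    return focal_px * real_width_m / depth_m

def distortion_ratio(camera_dist_m, nose_offset_m=0.10):
    """Ratio of projected widths for an identical real width placed at
    the nose plane vs. the ear plane. A ratio of 1.0 means no
    perspective distortion; larger values mean stronger distortion."""
    nose_depth = camera_dist_m - nose_offset_m  # nose ~10 cm closer (assumed)
    ear_depth = camera_dist_m
    same_width = 0.05  # same real width at both planes, for comparison
    return projected_width(same_width, nose_depth) / projected_width(same_width, ear_depth)

# Distortion drops quickly with distance and is nearly flat by 6 m,
# consistent with the abstract's six-meter estimation range.
for d in (0.5, 1.0, 2.0, 6.0):
    print(f"{d:.1f} m -> nose/ear width ratio {distortion_ratio(d):.3f}")
# 0.5 m -> ratio 1.250; 6.0 m -> ratio ~1.017
```

At half a meter (a selfie-like distance) the nose plane projects 25% larger than the ear plane under these assumptions, while at six meters the difference is under 2%, which is why distance cues recoverable from facial distortion vanish beyond that range.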
• Accurate estimation of subject-to-camera distance in portrait photographs.
• A novel metric is proposed, based on the effects of perspective in facial distortion.
• A new database of facial images at a distance is introduced for human identification.
• A transfer learning approach overcomes the limitations of current methods.
• Robust to expression, occlusion and pose without requiring anatomical information.
ISSN: 0957-4174; 1873-6793
DOI: 10.1016/j.eswa.2022.118457