Joint Discriminative Learning of Deep Dynamic Textures for 3D Mask Face Anti-Spoofing
Published in: | IEEE Transactions on Information Forensics and Security, 2019-04, Vol. 14 (4), pp. 923-938 |
Main Authors: | , , |
Format: | Article |
Language: | English |
Summary: | Three-dimensional (3D) mask spoofing attacks remain one of the main challenges in face recognition. A real face and a 3D mask display different facial motion patterns, which are reflected in different facial dynamic textures; a large portion of these motion differences, however, is subtle. We find that such subtle facial motion can be fully captured by multiple deep dynamic textures extracted from a convolutional layer of a convolutional neural network, but not all deep dynamic textures from different spatial regions and different channels of that layer are useful for distinguishing the subtle motion of real faces from that of 3D masks. In this paper, we propose a novel feature learning model that learns discriminative deep dynamic textures for 3D mask face anti-spoofing. A novel joint discriminative learning strategy is further incorporated into the model to jointly learn the spatial and channel discriminability of the deep dynamic textures. This strategy adaptively weights the discriminability of the learned features from different spatial regions and channels, ensuring that the more discriminative deep dynamic textures play more important roles in face/mask classification. Experiments on several publicly available datasets validate that the proposed method achieves promising results in both intra-dataset and cross-dataset scenarios. |
ISSN: | 1556-6013, 1556-6021 |
DOI: | 10.1109/TIFS.2018.2868230 |
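
The summary above describes two ingredients: deep dynamic textures computed from a convolutional layer's feature maps over time, and joint learning of spatial and channel discriminability weights. The sketch below is a minimal, hypothetical illustration of that weighting idea, not the authors' implementation: the dynamic-texture statistic (mean absolute temporal difference), the class name `WeightedDynamicTextureHead`, and all tensor shapes are assumptions made for the example.

```python
import torch
import torch.nn as nn


class WeightedDynamicTextureHead(nn.Module):
    """Hypothetical head: weights per-channel / per-region dynamic-texture statistics."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Learnable discriminability scores for each channel and each spatial cell.
        self.channel_logits = nn.Parameter(torch.zeros(channels))
        self.spatial_logits = nn.Parameter(torch.zeros(height * width))
        self.classifier = nn.Linear(channels * height * width, 2)  # real face vs. 3D mask

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (T, C, H, W) convolutional features over T frames of a face video.
        _, C, H, W = feats.shape
        # Crude stand-in for a "deep dynamic texture": mean absolute temporal difference.
        dyn = (feats[1:] - feats[:-1]).abs().mean(dim=0)               # (C, H, W)
        w_channel = torch.softmax(self.channel_logits, dim=0)          # (C,)
        w_spatial = torch.softmax(self.spatial_logits, dim=0).view(H, W)
        # Emphasize the more discriminative channels and spatial regions.
        weighted = dyn * w_channel[:, None, None] * w_spatial[None, :, :]
        return self.classifier(weighted.flatten().unsqueeze(0))        # (1, 2) logits


# Usage with random tensors standing in for a real network's conv-layer output.
feats = torch.randn(16, 64, 14, 14)      # 16 frames, 64 channels, 14x14 feature map
head = WeightedDynamicTextureHead(64, 14, 14)
logits = head(feats)
```

Here the softmax-normalized logits play the role of the adaptive spatial and channel weights described in the summary; in practice they would be trained jointly with the classifier on real-face and 3D-mask videos.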