
Discriminative feature learning-based pixel difference representation for facial expression recognition

Bibliographic Details
Published in: IET Computer Vision, 2017-12, Vol. 11 (8), pp. 675-682
Main Authors: Sun, Zhe, Hu, Zheng-Ping, Wang, Meng, Zhao, Shu-Huan
Format: Article
Language:English
Description
Summary: Recently, researchers have proposed different feature descriptors to achieve robust performance for facial expression recognition (FER). However, finding a discriminative feature descriptor remains one of the critical tasks. In this paper, we propose a discriminative feature learning scheme to improve the representation power of expressions. First, we obtain a discriminative feature matrix (DFM)-based pixel difference representation. Subsequently, all DFMs corresponding to the training samples are used to construct a discriminative feature dictionary (DFD). Next, the DFD is projected onto a vertical two-dimensional linear discriminant analysis (V-2DLDA) space to compute the between-class and within-class scatter, because V-2DLDA works well with the DFD in matrix representation and achieves good efficiency. Finally, a nearest neighbor (NN) classifier is used to determine the labels of the query samples. The DFD represents local feature changes that are robust to variations in expression, illumination, and other factors. In addition, we exploit V-2DLDA to find an optimal projection matrix, since it not only preserves the discriminative features but also reduces the dimensionality. The proposed method achieves satisfactory recognition results, reaching accuracy rates as high as 91.87% on the CK+ database, 82.24% on the KDEF database, and 78.94% on the CMU Multi-PIE database in the LOSO scenario, outperforming the other comparison methods.
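The pipeline summarized above (pixel-difference features, a matrix-preserving 2DLDA projection, and a nearest-neighbor classifier) can be sketched in NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the pixel-difference operator, the one-directional 2DLDA formulation, and the regularization term are all assumptions made for the sketch.

```python
import numpy as np

def pixel_difference_matrix(img):
    """A stand-in for the paper's DFM: horizontal and vertical first
    differences of the image, cropped to a common size and stacked
    side by side into one feature matrix."""
    dx = np.diff(img.astype(float), axis=1)   # shape (h, w-1)
    dy = np.diff(img.astype(float), axis=0)   # shape (h-1, w)
    return np.hstack([dx[:-1, :], dy[:, :-1]])  # shape (h-1, 2*(w-1))

def v2dlda_fit(mats, labels, n_dims, reg=1e-3):
    """One-directional (column-side) 2DLDA sketch: find a projection W
    maximizing between-class vs. within-class column scatter of the
    feature matrices. `reg` regularizes a possibly singular Sw."""
    mats = np.asarray(mats, dtype=float)
    labels = np.asarray(labels)
    overall_mean = mats.mean(axis=0)
    d = mats.shape[2]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in np.unique(labels):
        cls = mats[labels == c]
        class_mean = cls.mean(axis=0)
        diff = class_mean - overall_mean
        Sb += len(cls) * diff.T @ diff
        for X in cls:
            e = X - class_mean
            Sw += e.T @ e
    # leading eigenvectors of (Sw + reg*I)^-1 Sb form the projection
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_dims]]  # shape (d, n_dims)

def nn_classify(query_mat, train_mats, train_labels, W):
    """1-NN in the projected space under the Frobenius norm."""
    q = query_mat @ W
    dists = [np.linalg.norm(q - X @ W) for X in train_mats]
    return train_labels[int(np.argmin(dists))]
```

On synthetic two-class data (e.g. images with and without a bright patch), fitting `v2dlda_fit` on the stack of pixel-difference matrices and classifying held-out samples with `nn_classify` recovers the class labels; real FER use would substitute face images and tune `n_dims`.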
ISSN: 1751-9632
1751-9640
DOI: 10.1049/iet-cvi.2016.0505