
Robust Manifold Embedding for Face Recognition

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 101224-101234
Main Authors: Liu, Zhonghua, Xiang, Lingyun, Shi, Kaiming, Zhang, Kaibing, Wu, Qingtao
Format: Article
Language: English
Description
Summary: Flexible manifold embedding (FME) has been recognized as an effective method for face recognition because it integrates both the class label information of labeled data and the manifold structure information of all data. To achieve good performance, this method usually requires sufficient samples to make the manifold smooth. In practice, however, it is often hard to provide enough samples for FME. In view of facial symmetry, we utilize left/right mirror face images to address the shortage of samples in manifold embedding. These mirror images can reflect variations in illumination, pose, or both that the original face images cannot provide. Therefore, in this paper we propose a robust manifold embedding (RME) algorithm, which can fully use the class label information and correctly capture the underlying manifold structure. The proposed RME algorithm integrates two complementary characteristics, label fitness and manifold smoothness. Moreover, the original face images and their left/right mirror images are jointly used in learning RME, which yields better robustness against variations in both illumination and pose. Extensive experiments on several public face databases demonstrate that the proposed RME algorithm achieves higher recognition accuracy than the compared reference methods.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2997953
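
As a rough illustration of the mirror-face idea mentioned in the summary, the sketch below generates left/right mirror images from a single face image by exploiting facial symmetry. The abstract does not spell out the exact construction, so this assumes the common approach of reflecting one half of the face onto the other; the function name `left_right_mirror_faces` is hypothetical and not from the paper.

```python
import numpy as np


def left_right_mirror_faces(face):
    """Generate left/right mirror images from a 2-D face image.

    Assumed construction (not specified in the abstract): the left mirror
    image keeps the left half of the face and reflects it onto the right
    half; the right mirror image does the symmetric operation. An even
    image width is assumed so both outputs match the input size.
    """
    h, w = face.shape
    half = w // 2
    left_half = face[:, :half]
    right_half = face[:, w - half:]

    # Left mirror: left half followed by its horizontal reflection.
    left_mirror = np.hstack([left_half, np.fliplr(left_half)])
    # Right mirror: horizontal reflection of the right half, then the right half.
    right_mirror = np.hstack([np.fliplr(right_half), right_half])
    return left_mirror, right_mirror


if __name__ == "__main__":
    # Toy example with a random 64x64 "face"; real use would load cropped,
    # roughly frontal face images so that the symmetry assumption holds.
    face = np.random.rand(64, 64)
    left_mirror, right_mirror = left_right_mirror_faces(face)
    print(left_mirror.shape, right_mirror.shape)  # (64, 64) (64, 64)
```

In the paper's setting, such mirror images would be added alongside the original samples when learning the embedding, enlarging the training set with plausible illumination and pose variants; the exact way they enter the RME objective is described in the full text, not reproduced here.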