
A novel phase congruency based descriptor for dynamic facial expression analysis

Bibliographic Details
Published in: Pattern Recognition Letters, 2014-11, Vol. 49, pp. 55–61
Main Authors: Shojaeilangari, Seyedehsamaneh, Yau, Wei-Yun, Teoh, Eam-Khwang
Format: Article
Language:English
Description
Summary:
•A new feature extraction method to describe a dynamic event, providing both temporal and spatial information.
•The spatio-temporal descriptor is able to detect motion patterns in a video.
•Able to deal with different image resolutions and illumination conditions.
•High performance on the facial emotion recognition task.

Representation and classification of dynamic visual events in videos has been an active field of research. This work proposes a novel spatio-temporal descriptor based on the phase congruency concept and applies it to recognizing facial expressions from video sequences. The proposed descriptor comprises histograms of dominant phase congruency over multiple 3D orientations to describe both the spatial and temporal information of a dynamic event. The advantages of the proposed approach are local and dynamic processing, high accuracy, and robustness to image scale variation and illumination changes. We validated the performance of the approach on the Cohn-Kanade (CK+) database, where it achieved 95.44% accuracy in detecting six basic emotions. The approach was also shown to improve classification rates over the baseline results for the AVEC 2011 video subchallenge in detecting four emotion dimensions. We further validated its robustness to illumination and scale variation using our own collected dataset.
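The following is a minimal sketch of the core idea behind such a descriptor, not the authors' implementation: the paper's method operates on 3D (x, y, t) spatio-temporal volumes, while this simplification works on a single 2D frame. It computes per-orientation phase congruency with a log-Gabor filter bank and then histograms the dominant orientation per pixel, weighted by its phase congruency value. All function names and parameter values (scales, orientations, wavelengths) are illustrative assumptions.

import numpy as np

def log_gabor_bank(rows, cols, n_scales=4, n_orients=6,
                   min_wavelength=3.0, mult=2.1, sigma_on_f=0.55):
    """Frequency-domain log-Gabor filters over several scales and orientations.
    Parameter values are illustrative, not taken from the paper."""
    y, x = np.mgrid[-(rows // 2):rows - rows // 2,
                    -(cols // 2):cols - cols // 2]
    radius = np.sqrt((x / cols) ** 2 + (y / rows) ** 2)
    radius[rows // 2, cols // 2] = 1.0            # avoid log(0) at DC
    theta = np.arctan2(-y, x)
    filters = []
    for o in range(n_orients):
        angle = o * np.pi / n_orients
        # angular distance to this orientation, wrapped to [0, pi]
        d_theta = np.abs(np.angle(np.exp(1j * (theta - angle))))
        spread = np.exp(-(d_theta ** 2)
                        / (2 * (np.pi / n_orients / 1.5) ** 2))
        for s in range(n_scales):
            f0 = 1.0 / (min_wavelength * mult ** s)   # centre frequency
            radial = np.exp(-(np.log(radius / f0) ** 2)
                            / (2 * np.log(sigma_on_f) ** 2))
            radial[rows // 2, cols // 2] = 0.0        # suppress DC
            filters.append((o, np.fft.ifftshift(radial * spread)))
    return filters

def phase_congruency(img, n_scales=4, n_orients=6, eps=1e-4):
    """Per-orientation phase congruency |sum_s A_s e^{i phi_s}| / sum_s A_s
    (the classic Morrone-Owens measure, without noise compensation)."""
    rows, cols = img.shape
    F = np.fft.fft2(img.astype(np.float64))
    energy = np.zeros((n_orients, rows, cols), dtype=np.complex128)
    amp_sum = np.zeros((n_orients, rows, cols))
    for o, filt in log_gabor_bank(rows, cols, n_scales, n_orients):
        response = np.fft.ifft2(F * filt)     # complex (even/odd) response
        energy[o] += response
        amp_sum[o] += np.abs(response)
    return np.abs(energy) / (amp_sum + eps)   # in [0, 1], contrast invariant

def dominant_pc_histogram(img, n_orients=6):
    """Histogram of the orientation with maximal phase congruency per pixel,
    weighted by that value -- a 2D analogue of the paper's descriptor."""
    pc = phase_congruency(img, n_orients=n_orients)
    dominant = pc.argmax(axis=0)              # winning orientation index
    weights = pc.max(axis=0)
    hist = np.bincount(dominant.ravel(), weights=weights.ravel(),
                       minlength=n_orients)
    return hist / (hist.sum() + 1e-12)        # normalised descriptor

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))              # stand-in for a face frame
    print(dominant_pc_histogram(frame))

Because phase congruency normalises local energy by the sum of filter amplitudes, the descriptor is largely invariant to illumination and contrast changes, which is the property the abstract highlights; extending the filter bank to 3D orientations adds the temporal (motion) component.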
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2014.06.009