Leveraging recent advances in deep learning for audio-visual emotion recognition

Bibliographic Details
Published in: Pattern Recognition Letters, 2021-06, Vol. 146, pp. 1-7
Main Authors: Schoneveld, Liam; Othmani, Alice; Abdelkawy, Hazem
Format: Article
Language: English
Description
Summary:
• A new high-performing deep neural network-based approach for Audio-Visual Emotion Recognition (AVER).
• Learning two independent feature extractors specialised for emotion recognition.
• Learning two independent feature extractors that could be employed for any downstream audio-visual emotion recognition task.
• Applying knowledge distillation (specifically, self-distillation), alongside additional unlabeled data, for facial expression recognition (FER).
• Learning the spatio-temporal dynamics via a recurrent neural network for AVER.

Emotional expressions are the behaviors that communicate our emotional state or attitude to others. They are conveyed through verbal and non-verbal communication. Complex human behavior can be understood by studying physical features from multiple modalities, mainly facial expressions, vocal cues, and physical gestures. Recently, spontaneous multi-modal emotion recognition has been extensively studied for human behavior analysis. In this paper, we propose a new deep learning-based approach for audio-visual emotion recognition. Our approach leverages recent advances in deep learning, such as knowledge distillation and high-performing deep architectures. The deep feature representations of the audio and visual modalities are fused using a model-level fusion strategy. A recurrent neural network is then used to capture the temporal dynamics. Our proposed approach substantially outperforms state-of-the-art approaches in predicting valence on the RECOLA dataset. Moreover, our proposed visual facial expression feature extraction network outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets.
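The summary mentions self-distillation with additional unlabeled data for the FER feature extractor, but the record gives no implementation details. The following PyTorch sketch shows one common formulation of that idea; the temperature, the loss weight alpha, and the student/teacher modules are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KL divergence between teacher and student predictions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes match the hard-label loss.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

def training_step(student, teacher, images, labels=None, alpha=0.5):
    """One self-distillation step: labeled images combine cross-entropy
    with the distillation term; unlabeled images (labels=None) use the
    distillation term alone, which is how extra unlabeled data can help."""
    with torch.no_grad():
        teacher_logits = teacher(images)  # teacher is frozen for this step
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits)
    if labels is not None:
        loss = alpha * loss + (1 - alpha) * F.cross_entropy(student_logits, labels)
    return loss
```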
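The summary also states that deep audio and visual features are fused with a model-level fusion strategy, after which a recurrent neural network captures the temporal dynamics for valence prediction. The sketch below is a minimal, hypothetical rendering of such a pipeline; the feature dimensions, the single-layer fusion projection, and the GRU are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class FusionValenceModel(nn.Module):
    """Model-level fusion of per-frame audio and visual embeddings,
    followed by a recurrent layer that predicts valence over time.
    All sizes here are illustrative, not taken from the paper."""

    def __init__(self, audio_dim=128, visual_dim=512, hidden_dim=256):
        super().__init__()
        # Project the concatenated modality features into a joint space.
        self.fusion = nn.Linear(audio_dim + visual_dim, hidden_dim)
        # The recurrent layer models temporal dynamics of the fused features.
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # One continuous valence value per time step.
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, audio_feats, visual_feats):
        # audio_feats:  (batch, time, audio_dim), from the audio extractor
        # visual_feats: (batch, time, visual_dim), from the FER extractor
        fused = torch.tanh(self.fusion(torch.cat([audio_feats, visual_feats], dim=-1)))
        out, _ = self.rnn(fused)
        return self.head(out).squeeze(-1)  # (batch, time) valence trajectory

# Usage with random stand-in features:
# model = FusionValenceModel()
# valence = model(torch.randn(4, 100, 128), torch.randn(4, 100, 512))
```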
ISSN: 0167-8655
eISSN: 1872-7344
DOI: 10.1016/j.patrec.2021.03.007