Deep Auto-Encoders With Sequential Learning for Multimodal Dimensional Emotion Recognition

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2022-01, Vol. 24, pp. 1313-1324
Main Authors: Nguyen, Dung, Nguyen, Duc Thanh, Zeng, Rui, Nguyen, Thanh Thi, N. Tran, Son, Nguyen, Thin, Sridharan, Sridha, Fookes, Clinton
Format: Article
Language:English
Description
Summary: Multimodal dimensional emotion recognition has drawn great attention from the affective computing community, and numerous schemes have been investigated, making significant progress in this area. However, several questions remain unanswered for most existing approaches, including: (i) how to simultaneously learn compact yet representative features from multimodal data, (ii) how to effectively capture complementary features from multimodal streams, and (iii) how to perform all these tasks in an end-to-end manner. To address these challenges, this paper proposes a novel deep neural network architecture consisting of a two-stream auto-encoder and a long short-term memory (LSTM) network that effectively integrates visual and audio signal streams for emotion recognition. To validate the robustness of the proposed architecture, the authors carry out extensive experiments on the multimodal emotion-in-the-wild dataset RECOLA. Experimental results show that the proposed method achieves state-of-the-art recognition performance.
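The abstract describes a pipeline in which each modality (visual and audio) is first encoded into a compact latent representation, the two latents are fused, and an LSTM models the fused sequence to regress continuous emotion dimensions. The following is a minimal forward-pass sketch of that general structure, not the authors' exact model: all dimensions, weights, and the concatenation-based fusion are illustrative assumptions, and training, decoders, and reconstruction losses are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W, b):
    # Single-layer stand-in for one auto-encoder stream:
    # projects a modality's features into a compact latent space.
    return np.tanh(x @ W + b)

def lstm_step(x, h, c, Wx, Wh, b):
    # One LSTM step over the fused audio-visual latent sequence.
    z = x @ Wx + h @ Wh + b
    i, f, o, g = np.split(z, 4, axis=-1)        # input/forget/output gates, candidate
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)        # update cell state
    h = sig(o) * np.tanh(c)                     # new hidden state
    return h, c

# Hypothetical dimensions (not taken from the paper).
T, d_vis, d_aud, d_lat, d_hid = 5, 64, 32, 16, 24

# Random weights stand in for trained parameters.
Wv = rng.normal(size=(d_vis, d_lat)); bv = np.zeros(d_lat)   # visual encoder
Wa = rng.normal(size=(d_aud, d_lat)); ba = np.zeros(d_lat)   # audio encoder
Wx = rng.normal(size=(2 * d_lat, 4 * d_hid))                 # LSTM input weights
Wh = rng.normal(size=(d_hid, 4 * d_hid)); bl = np.zeros(4 * d_hid)
W_out = rng.normal(size=(d_hid, 2))                          # e.g. arousal, valence

vis = rng.normal(size=(T, d_vis))   # per-frame visual features
aud = rng.normal(size=(T, d_aud))   # per-frame audio features

h = np.zeros(d_hid); c = np.zeros(d_hid)
for t in range(T):
    # Two-stream encoding, then fusion by concatenating the latents.
    z = np.concatenate([encoder(vis[t], Wv, bv), encoder(aud[t], Wa, ba)])
    h, c = lstm_step(z, h, c, Wx, Wh, bl)

pred = h @ W_out  # continuous (dimensional) emotion estimate, shape (2,)
```

In the actual end-to-end system, the encoders would be trained jointly with the LSTM (and with reconstruction objectives for the auto-encoder part), so the latents are both compact and complementary across modalities, which is the point of challenges (i) and (ii) above.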
ISSN:1520-9210
1941-0077
DOI:10.1109/TMM.2021.3063612