
Deep Auto-Encoders With Sequential Learning for Multimodal Dimensional Emotion Recognition

Multimodal dimensional emotion recognition has drawn great attention from the affective computing community, and numerous schemes have been extensively investigated, making significant progress in this area. However, several questions remain unanswered by most existing approaches, including: (i) how to simultaneously learn compact yet representative features from multimodal data, (ii) how to effectively capture complementary features from multimodal streams, and (iii) how to perform all of these tasks in an end-to-end manner. To address these challenges, this paper proposes a novel deep neural network architecture consisting of a two-stream auto-encoder and a long short-term memory (LSTM) network for effectively integrating visual and audio signal streams for emotion recognition. To validate the robustness of the proposed architecture, extensive experiments are carried out on the multimodal emotion-in-the-wild dataset RECOLA. Experimental results show that the proposed method achieves state-of-the-art recognition performance.
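This record contains no code. As a rough illustration of the kind of pipeline the abstract describes (one auto-encoder per modality learning a compact latent code, the two codes fused per time step and fed to an LSTM, trained end-to-end), here is a minimal PyTorch sketch; the layer sizes, feature dimensions, and loss weighting are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch (not the authors' implementation) of a two-stream
    # auto-encoder with an LSTM for dimensional emotion regression.
    import torch
    import torch.nn as nn

    class ModalityAutoEncoder(nn.Module):
        """Fully connected auto-encoder for one stream (audio or visual features)."""
        def __init__(self, in_dim: int, latent_dim: int):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                         nn.Linear(256, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                         nn.Linear(256, in_dim))

        def forward(self, x):
            z = self.encoder(x)           # compact latent code
            return z, self.decoder(z)     # code + reconstruction (for the AE loss)

    class TwoStreamEmotionNet(nn.Module):
        def __init__(self, audio_dim=88, visual_dim=512, latent_dim=64, hidden=128):
            super().__init__()
            self.audio_ae = ModalityAutoEncoder(audio_dim, latent_dim)
            self.visual_ae = ModalityAutoEncoder(visual_dim, latent_dim)
            # LSTM over the concatenated per-frame latent codes of both streams
            self.lstm = nn.LSTM(2 * latent_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)  # arousal and valence per time step

        def forward(self, audio, visual):
            # audio: (batch, time, audio_dim); visual: (batch, time, visual_dim)
            za, audio_rec = self.audio_ae(audio)
            zv, visual_rec = self.visual_ae(visual)
            fused, _ = self.lstm(torch.cat([za, zv], dim=-1))
            return self.head(fused), audio_rec, visual_rec

    # End-to-end training signal: regression on the emotion labels plus the two
    # reconstruction terms; the 0.1 weighting is an assumption for illustration.
    model = TwoStreamEmotionNet()
    audio = torch.randn(4, 100, 88)     # e.g. per-frame acoustic features
    visual = torch.randn(4, 100, 512)   # e.g. per-frame face embeddings
    labels = torch.randn(4, 100, 2)     # frame-level arousal/valence targets
    pred, a_rec, v_rec = model(audio, visual)
    loss = nn.functional.mse_loss(pred, labels) \
         + 0.1 * nn.functional.mse_loss(a_rec, audio) \
         + 0.1 * nn.functional.mse_loss(v_rec, visual)
    loss.backward()

In a sketch like this, the reconstruction terms are what push the latent codes to stay compact yet representative, while the shared LSTM over the concatenated codes is one simple way to capture complementary information across the two streams.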

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2022-01, Vol. 24, pp. 1313-1324
Main Authors: Nguyen, Dung; Nguyen, Duc Thanh; Zeng, Rui; Nguyen, Thanh Thi; Tran, Son N.; Nguyen, Thin; Sridharan, Sridha; Fookes, Clinton
Format: Article
Language: English
Subjects: Affective computing; Artificial neural networks; Auto-encoder; Coders; Computer architecture; Convolution; Dimensional emotion recognition; Emotion recognition; Emotions; Feature extraction; Long short-term memory; Machine learning; Multimodal emotion recognition; Short term; Streaming media; Streams; Two-dimensional displays; Visual signals; Visualization
DOI: 10.1109/TMM.2021.3063612
ISSN: 1520-9210
EISSN: 1941-0077
Source: IEEE Xplore (Online service)
Online Access: https://ieeexplore.ieee.org/document/9374787