
A snapshot research and implementation of multimodal information fusion for data-driven emotion recognition


Bibliographic Details
Published in: Information Fusion, 2020-01, Vol. 53, pp. 209-221
Main Authors: Jiang, Yingying, Li, Wei, Hossain, M. Shamim, Chen, Min, Alelaiwi, Abdulhameed, Al-Hammadi, Muneer
Format: Article
Language: English
Description
Summary:
•A real-time mental healthcare system is designed in this article.
•Multimodal emotion data sets, feature extraction, and fusion strategies are discussed.
•Scientific problems and future research directions are clearly laid out.

With the rapid development of artificial intelligence and the mobile Internet, new requirements for human-computer interaction have emerged. Personalized emotional interaction services are a growing trend in the human-computer interaction field. As the basis of emotional interaction, emotion recognition has likewise seen many new advances driven by artificial intelligence. Current research on emotion recognition mostly focuses on single-modal recognition, such as facial expression recognition, speech recognition, limb (gesture) recognition, and physiological signal recognition. However, the limited emotional information carried by a single modality, together with its vulnerability to various external factors, leads to lower recognition accuracy. Therefore, multimodal information fusion for data-driven emotion recognition has been attracting the attention of researchers in the affective computing field. This paper reviews the development background and hot spots of data-driven multimodal emotion information fusion. In the context of a real-time mental health monitoring system, the current development of multimodal emotion data sets, multimodal feature extraction (including EEG, speech, expression, and text features), and multimodal fusion strategies and recognition methods are discussed and summarized in detail. The main objective of this work is to present a clear explanation of the scientific problems and future research directions in the field of multimodal information fusion for data-driven emotion recognition.
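To make the fusion idea concrete, the following is a minimal sketch of one common strategy covered by surveys of this kind: decision-level (late) fusion, where each modality produces its own class-probability vector and the fused prediction is a weighted average. The modality names, weights, and probability values below are hypothetical placeholders, not results from the paper.

```python
# Decision-level (late) fusion sketch: combine per-modality emotion
# probabilities by a normalized weighted average, then take the argmax.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def late_fusion(modality_probs, weights):
    """Weighted average of per-modality probability vectors.

    modality_probs: dict mapping modality name -> list of class probabilities
    weights: dict mapping modality name -> relative weight (normalized here)
    """
    fused = [0.0] * len(EMOTIONS)
    total = sum(weights.values())
    for name, probs in modality_probs.items():
        w = weights[name] / total
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

def predict(fused):
    """Return the emotion label with the highest fused probability."""
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: speech is confident the subject is sad; expression leans neutral.
probs = {
    "speech":     [0.10, 0.70, 0.10, 0.10],
    "expression": [0.20, 0.30, 0.10, 0.40],
}
weights = {"speech": 0.6, "expression": 0.4}
print(predict(late_fusion(probs, weights)))  # sad
```

Feature-level (early) fusion, by contrast, would concatenate the per-modality feature vectors before a single classifier; the survey discusses both families of strategies.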
ISSN: 1566-2535
1872-6305
DOI: 10.1016/j.inffus.2019.06.019