Multimodal emotion recognition model via hybrid model with improved feature level fusion on facial and EEG feature set
Published in: Multimedia Tools and Applications, 2025, Vol. 84(1), p. 1-36
Format: Article
Language: English
Summary: In recent years, academics have placed a high value on multimodal emotion recognition, and extensive research has been conducted on emotion detection from video, text, voice, and physiological signals. This paper proposes a novel multimodal emotion recognition model that employs a hybrid model with AMIG-based feature fusion on facial and EEG feature sets. The EEG signal and the facial image are preprocessed to remove unwanted background noise with a Butterworth filter and the Viola-Jones algorithm, respectively. From the preprocessed EEG signal, features such as the EWFS-transform, wavelet features, and CSP-based features are extracted. In the proposed EWFS-transform, the window function is modified using the frequency function and an updated STFT. Conversely, SE-AMM-EST-based features, LGXP, and GLCM features are extracted from the preprocessed face image. In the proposed SE-AMM-EST-based features, the shape mean and the covariance in PCA are updated to extract texture-based features. To obtain essential, redundancy-free features, AMIG-based feature fusion is proposed. The fused features are then fed into a proposed hybrid model that combines LSTM and AML-CNN models.
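The abstract's EEG pipeline (Butterworth filtering followed by STFT-style feature extraction) can be sketched in a few lines. This is an illustrative sketch only: the record does not give the paper's filter order, pass band, sampling rate, or window parameters, so every numeric value below is an assumption, and the plain STFT stands in for the paper's modified EWFS-transform.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

def preprocess_eeg(signal, fs=128.0, low=4.0, high=45.0, order=4):
    """Band-pass the raw EEG with a Butterworth filter.

    The 4-45 Hz band and filter order are assumed values for
    illustration, not taken from the paper.
    """
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    # Zero-phase filtering avoids shifting the signal in time.
    return filtfilt(b, a, signal)

def stft_features(signal, fs=128.0, nperseg=64):
    """Short-time Fourier transform magnitudes as a simple
    time-frequency feature map (a stand-in for the EWFS-transform)."""
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    return np.abs(Z)

# Random noise as a stand-in for one EEG channel.
rng = np.random.default_rng(0)
raw = rng.standard_normal(1024)
clean = preprocess_eeg(raw)
feats = stft_features(clean)
print(clean.shape, feats.shape)
```

In practice the feature map would be computed per channel and concatenated with the wavelet and CSP features described in the abstract before fusion.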
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-024-19171-2