Deep learning with convolutional neural networks for EEG-based music emotion decoding and visualization

Bibliographic Details
Published in: Brain-Apparatus Communication, 2022-12, Vol. 1 (1), p. 38-49
Main Authors: Qian, Wenxia, Tan, Jianling, Jiang, Yuhao, Tian, Yin
Format: Article
Language:English
Description
Summary: Purpose: Emotion reflects an individual's perception and understanding of the world and requires the synergy of multiple brain regions. Many emotion decoding methods based on electroencephalogram (EEG) have been proposed, but how to extract the most discriminative and cognitively meaningful features for model construction remains an open question. This paper aims to construct a model that extracts such features. Materials and methods: We collected EEG signals from 24 subjects in a musical emotion induction experiment. An end-to-end branch LSTM-CNN (BLCNN) was then used to extract emotion features from the laboratory dataset and the DEAP dataset for emotion decoding. Finally, the extracted features were visualized on the laboratory dataset using saliency maps. Result: Three-class accuracy on the laboratory dataset was 95.78% ± 1.70%, and four-class accuracy on the DEAP dataset was 80.97% ± 7.99%. The discriminative features of positive emotion were distributed in the left hemisphere and those of negative emotion in the right hemisphere, mainly over the frontal, parietal, and occipital lobes. Conclusion: We proposed a neural network model, BLCNN, which achieved good results on both the laboratory dataset and the DEAP dataset. Visual analysis of the features extracted by BLCNN showed that they are consistent with emotional cognition. This work therefore offers a new perspective for the practical application of human-computer emotional interaction.
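The abstract describes a two-branch LSTM-CNN that fuses temporal and convolutional features from multichannel EEG before classification. A minimal sketch of that general architecture in PyTorch follows; the layer sizes, channel count (32), window length (128 samples), and concatenation-based fusion are illustrative assumptions, not the authors' published BLCNN configuration.

```python
# Hedged sketch of a branch LSTM-CNN ("BLCNN"-style) EEG emotion classifier.
# All hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn

class BranchLSTMCNN(nn.Module):
    def __init__(self, n_channels=32, n_times=128, n_classes=3):
        super().__init__()
        # Temporal branch: an LSTM reads the multichannel signal over time.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=64,
                            batch_first=True)
        # Convolutional branch: 1-D convolution over the time axis.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse time -> (batch, 32, 1)
        )
        # Fuse both branch embeddings and classify.
        self.head = nn.Linear(64 + 32, n_classes)

    def forward(self, x):                         # x: (batch, channels, time)
        _, (h, _) = self.lstm(x.transpose(1, 2))  # LSTM expects (batch, time, feat)
        lstm_feat = h[-1]                         # last hidden state: (batch, 64)
        cnn_feat = self.cnn(x).squeeze(-1)        # (batch, 32)
        return self.head(torch.cat([lstm_feat, cnn_feat], dim=1))

model = BranchLSTMCNN()
logits = model(torch.randn(4, 32, 128))  # 4 one-second EEG windows
print(logits.shape)                      # torch.Size([4, 3])
```

Gradients of the winning class score with respect to the input `x` would give the saliency map the abstract refers to, highlighting which channels and time points drive each decision.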
ISSN: 2770-6710
DOI: 10.1080/27706710.2022.2075241