
Sparse Logistic Regression With L1/2 Penalty for Emotion Recognition in Electroencephalography Classification

Bibliographic Details
Published in:Frontiers in neuroinformatics 2020-08, Vol.14
Main Authors: Chen, Dong-Wei, Miao, Rui, Deng, Zhao-Yong, Lu, Yue-Yue, Liang, Yong, Huang, Lan
Format: Article
Language:English
Description
Summary: Emotion recognition based on electroencephalography (EEG) signals is a current focus in brain-computer interface research. However, EEG classification is difficult owing to the large amounts of data and high levels of noise. It is therefore important to determine how to effectively extract features that carry important information. Regularization, one of the effective methods for EEG signal processing, can extract important features from the signal and has potential applications in EEG emotion recognition. Currently, the most popular regularization techniques are the L1 (Lasso) and L2 penalties. In recent years, researchers have proposed many other regularization terms. In theory, an Lq-type penalty with a smaller q value can find solutions with better sparsity. L1/2 regularization is of Lq type (0 < q < 1) and has been shown to have many attractive properties. In this work, we studied the L1/2 penalty in sparse logistic regression for three-class EEG emotion recognition, and used a coordinate descent algorithm with a univariate half-thresholding operator to implement L1/2-penalized logistic regression. Experimental results on simulated and real data demonstrate that the proposed method outperforms other existing regularization methods. Sparse logistic regression with the L1/2 penalty achieves higher classification accuracy than the conventional L1, L2, and elastic net regularization methods, using fewer but more informative EEG signals. This is very important for high-dimensional, small-sample EEG data, and can help researchers reduce computational complexity and improve computational accuracy. We therefore propose sparse logistic regression with the L1/2 penalty as an effective technique for emotion recognition in practical classification problems.
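The procedure the abstract describes (coordinate descent with a univariate half-thresholding step for the L1/2 penalty) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: it uses the closed-form half-thresholding operator of Xu et al. (2012) as the proximal step, a binary formulation (a three-class problem as in the paper could be handled one-vs-rest), and synthetic data shapes and hyperparameters that are all assumptions.

```python
import numpy as np

def half_threshold(z, lam):
    # Proximal operator of lam * |x|^(1/2): the "half-thresholding"
    # operator (Xu et al., 2012). Returns 0 below the threshold;
    # otherwise a shrunken value given in closed form via arccos.
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    if abs(z) <= thresh:
        return 0.0
    phi = np.arccos((lam / 8.0) * (abs(z) / 3.0) ** (-1.5))
    return (2.0 / 3.0) * z * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))

def l12_logistic(X, y, lam=0.1, n_iter=100):
    # Coordinate descent for binary logistic regression with an L1/2
    # penalty; y takes values in {0, 1}. Each coordinate takes a
    # gradient step on the logistic loss (using the bound 0.25 * ||x_j||^2
    # on the per-coordinate curvature) followed by a half-thresholding
    # proximal step.
    n, p = X.shape
    beta = np.zeros(p)
    Xbeta = np.zeros(n)
    curv = 0.25 * (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            prob = 1.0 / (1.0 + np.exp(-np.clip(Xbeta, -30, 30)))
            grad_j = X[:, j] @ (prob - y)           # coordinate gradient
            z = beta[j] - grad_j / curv[j]          # gradient step
            new = half_threshold(z, lam / curv[j])  # L1/2 proximal step
            Xbeta += X[:, j] * (new - beta[j])      # keep X @ beta in sync
            beta[j] = new
    return beta
```

On synthetic data in which only a couple of features are informative, this estimator typically zeroes out most irrelevant coefficients while keeping the informative ones, mirroring the "fewer but more informative signals" behaviour the abstract reports for the L1/2 penalty.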
ISSN:1662-5196
DOI:10.3389/fninf.2020.00029