Combining generative adversarial networks and multi-output CNN for motor imagery classification
Published in: Journal of Neural Engineering, 2021-08, Vol. 18(4), p. 046026
Format: Article
Language: English
Summary: Motor imagery (MI) classification is an important task in the brain-computer interface (BCI) field. MI data are highly dynamic and difficult to obtain, which challenges the performance of classification models. Recently, convolutional neural networks (CNNs) have been employed for MI classification and have demonstrated favorable performance. However, a traditional CNN uses a single end-to-end output, so part of the intermediate feature information is discarded during transmission.
Herein, we propose a novel algorithm that combines a long short-term memory generative adversarial network (LGAN) with a multi-output convolutional neural network (MoCNN) for MI classification, together with an attention network to further improve model performance. The proposed method comprises three steps. First, MI data are obtained and preprocessed. Second, additional training data are generated using a data augmentation method based on the LGAN. Last, the MoCNN is applied to improve the classification performance.
The BCI Competition IV datasets 2a and 2b are employed to evaluate the proposed method. Across multiple evaluation indicators, the proposed generative model produces more realistic data, and the expanded training set improves the performance of the classification model.
The results show that the proposed method improves the classification of MI data, which facilitates motor imagery applications.
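The multi-output idea summarized above can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption rather than the authors' architecture: the single-channel toy input, the kernel lengths, the two convolutional stages, the global-average-pooled linear heads, and the simple averaging fusion. The point it demonstrates is that each intermediate feature stage feeds its own classifier head, so intermediate feature information contributes directly to the final prediction instead of being discarded.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def conv1d(x, w):
    """Valid 1-D convolution of signal x with kernel w."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

n_classes = 4                          # BCI IV dataset 2a has four MI classes
x = rng.standard_normal(128)           # toy stand-in for one preprocessed EEG channel

# Two convolutional stages with ReLU (random weights stand in for trained ones).
f1 = np.maximum(conv1d(x, rng.standard_normal(7)), 0)   # stage-1 feature map
f2 = np.maximum(conv1d(f1, rng.standard_normal(7)), 0)  # stage-2 feature map

# One classifier head per stage: global-average pool -> linear -> softmax.
heads = [rng.standard_normal(n_classes) for _ in range(2)]
p1 = softmax(f1.mean() * heads[0])     # prediction from early features
p2 = softmax(f2.mean() * heads[1])     # prediction from deeper features

# Fuse the per-stage predictions; plain averaging is used here for simplicity.
p = (p1 + p2) / 2
```

A trained model would learn the convolution kernels and head weights jointly, with a loss term on each head, so that gradients reach the early stages directly through their own outputs.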
ISSN: 1741-2552
DOI: 10.1088/1741-2552/abecc5