Improving Generalized Zero-Shot Learning SSVEP Classification Performance From Data-Efficient Perspective

Bibliographic Details
Published in: IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, Vol. 31, pp. 4135-4145
Main Authors: Wang, Xietian; Liu, Aiping; Wu, Le; Guan, Ling; Chen, Xun
Format: Article
Language: English
Description
Summary: Generalized zero-shot learning (GZSL) has significantly reduced the training requirements of steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs). Traditional methods require training data for every class, whereas GZSL needs data for only a subset of classes, dividing them into 'seen' classes (those with training data) and 'unseen' classes (those without). However, inefficient utilization of SSVEP data limits the accuracy and information transfer rate (ITR) of existing GZSL methods. To this end, we proposed a framework that uses SSVEP data more effectively at three systematically combined levels: data acquisition, feature extraction, and decision-making. First, prevalent SSVEP-based BCIs overlook inter-subject variance in visual latency and employ a fixed sampling starting time (SST). At the data-acquisition level, we introduced a dynamic sampling starting time (DSST) strategy that uses classification results on a validation set to find the optimal sampling starting time (OSST) for each subject. At the feature-extraction level, we developed a Transformer structure whose global receptive field captures the global information of the input, compensating for the small receptive fields of existing networks and allowing longer input sequences to be processed adequately. At the decision-making level, we designed a classifier selection strategy that automatically selects the optimal classifier for the seen and unseen classes, respectively. We also proposed a training procedure that makes these solutions work in conjunction with one another. Our method was validated on three public datasets and outperformed the state-of-the-art (SOTA) methods; crucially, it also outperformed representative methods that require training data for all classes.
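The abstract describes the DSST strategy only at a high level: per subject, scan candidate sampling starting times and keep the one that maximizes validation accuracy. The paper's implementation is not reproduced here; the following is a minimal Python sketch of that idea under stated assumptions. The function name `find_optimal_sst`, the `classify` callable, and the candidate grid are all hypothetical, not the authors' code.

```python
import numpy as np

def find_optimal_sst(epochs, labels, classify, candidate_ssts, win_len, fs=250):
    """Grid-search the optimal sampling starting time (OSST) for one subject.

    epochs:         array (n_trials, n_channels, n_samples) of validation SSVEP data
    labels:         array (n_trials,) of true class indices
    classify:       callable mapping a (n_trials, n_channels, win_samples) batch
                    to predicted class indices (any trained SSVEP classifier)
    candidate_ssts: iterable of candidate starting times in seconds
    win_len:        analysis window length in seconds
    fs:             sampling rate in Hz
    """
    best_sst, best_acc = None, -1.0
    win = int(round(win_len * fs))
    for sst in candidate_ssts:
        start = int(round(sst * fs))
        if start + win > epochs.shape[-1]:
            continue  # window would run past the end of the epoch
        preds = classify(epochs[..., start:start + win])
        acc = float(np.mean(preds == labels))
        if acc > best_acc:  # keep the starting time with the best validation accuracy
            best_sst, best_acc = sst, acc
    return best_sst, best_acc

# Example usage (hypothetical grid): scan starting times from 0 to 0.3 s in 20 ms steps.
# sst, acc = find_optimal_sst(val_epochs, val_labels, model.predict,
#                             np.arange(0.0, 0.32, 0.02), win_len=1.0)
```

Searching on a held-out validation set, rather than the training data, is what lets the chosen OSST absorb each subject's individual visual latency without overfitting the classifier itself.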
ISSN: 1534-4320; 1558-0210
DOI: 10.1109/TNSRE.2023.3324148