Data-Driven Offline Optimization of Deep CNN models for EEG and ECoG Decoding

Bibliographic Details
Main Authors: Tragoudaras, Antonios, Fanaras, Konstantinos, Antoniadis, Charalampos, Massoud, Yehia
Format: Conference Proceeding
Language:English
Description
Summary:A better understanding of electroencephalography (EEG) and electrocorticogram (ECoG) signals would bring us closer to comprehending brain function, opening new avenues for treating brain disorders and for developing novel Brain-Computer Interface (BCI) applications. Deep Convolutional Neural Networks (deep CNNs) have recently been employed with remarkable success to decode EEG/ECoG signals. However, the choice of architectural/training parameter values in these deep CNN architectures has received little attention. Meanwhile, new data-driven optimization methodologies that leverage major advances in Machine Learning, such as the Transformer model, have recently been proposed. Because an exhaustive search over all possible architectural/training parameter values of the state-of-the-art deep CNN model (our baseline model) for decoding the motor imagery EEG and finger flexion ECoG signals of the BCI IV 2a and BCI IV 4 datasets, respectively, would take a prohibitive amount of time, this paper proposes a model-based optimization technique, built on the Transformer model, to discover optimal architectural/training parameter values for that model. Our findings indicate that better architectural/training parameter values can be found for the baseline model, improving its accuracy by 3.4% on the BCI IV 2a dataset and by 29.8% on the BCI IV 4 dataset.
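The model-based optimization the summary describes can be sketched, in spirit, as a surrogate-guided hyperparameter search: cheaply score many candidate configurations with a learned surrogate and spend expensive training runs only on the most promising ones. The parameter names, ranges, the toy accuracy function, and the simple nearest-neighbour surrogate below are all illustrative assumptions; the paper itself uses a Transformer surrogate over the actual deep-CNN architectural/training parameters.

```python
import random

# Hypothetical search space for a deep-CNN EEG/ECoG decoder.
# Names and ranges are illustrative, not the paper's actual parameters.
SPACE = {
    "n_filters": [16, 32, 64, 128],
    "kernel_len": [5, 11, 25, 51],
    "dropout": [0.2, 0.3, 0.4, 0.5],
    "lr_exp": [-4.0, -3.5, -3.0, -2.5],
}

def sample_config(rng):
    return {name: rng.choice(values) for name, values in SPACE.items()}

def toy_accuracy(cfg):
    """Stand-in for an expensive train-and-validate run on EEG/ECoG data."""
    return (0.6
            + 0.001 * cfg["n_filters"]
            - 0.002 * abs(cfg["kernel_len"] - 25)
            - 0.1 * abs(cfg["dropout"] - 0.3)
            - 0.02 * abs(cfg["lr_exp"] + 3.0))

def surrogate_predict(cfg, history, k=3):
    """k-nearest-neighbour surrogate: average measured accuracy of the k
    most similar already-evaluated configs (the paper trains a Transformer
    surrogate instead of this simple heuristic)."""
    def dist(a, b):
        return sum(0 if a[name] == b[name] else 1 for name in SPACE)
    nearest = sorted(history, key=lambda h: dist(cfg, h[0]))[:k]
    return sum(acc for _, acc in nearest) / len(nearest)

def model_based_search(n_init=8, n_rounds=20, seed=0):
    rng = random.Random(seed)
    # Bootstrap the surrogate with a few random (config, accuracy) pairs.
    history = [(c, toy_accuracy(c))
               for c in (sample_config(rng) for _ in range(n_init))]
    for _ in range(n_rounds):
        # Score many cheap candidates with the surrogate; run the expensive
        # evaluation only on the candidate the surrogate likes best.
        candidates = [sample_config(rng) for _ in range(64)]
        best = max(candidates, key=lambda c: surrogate_predict(c, history))
        history.append((best, toy_accuracy(best)))
    return max(history, key=lambda h: h[1])

best_cfg, best_acc = model_based_search()
print(best_cfg, round(best_acc, 3))
```

The design point this illustrates is the one the abstract argues for: exhaustive search over all value combinations is prohibitively expensive, so a learned model of the parameter-to-accuracy mapping steers the budget of full training runs toward promising regions of the space.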
ISSN:2158-1525
DOI:10.1109/ISCAS46773.2023.10181761