
Design and Evaluation of CPU-, GPU-, and FPGA-Based Deployment of a CNN for Motor Imagery Classification in Brain-Computer Interfaces


Bibliographic Details
Published in: Electronics (Basel), 2024-05, Vol. 13 (9), p. 1646
Main Authors: Pacini, Federico, Pacini, Tommaso, Lai, Giuseppe, Zocco, Alessandro Michele, Fanucci, Luca
Format: Article
Language:English
Description
Summary: Brain–computer interfaces (BCIs) have gained popularity in recent years. Among noninvasive BCIs, EEG-based systems stand out as the primary approach, utilizing the motor imagery (MI) paradigm to discern movement intentions. Initially, BCIs were predominantly implemented on nonembedded systems. However, there is now growing momentum towards shifting computation to the edge, offering advantages such as enhanced privacy, reduced transmission bandwidth, and real-time responsiveness. Despite this trend, achieving the desired target remains a work in progress. To illustrate the feasibility of this shift and quantify the potential benefits, this paper presents a comparison of deploying a CNN for MI classification across different computing platforms, namely, CPU-, embedded GPU-, and FPGA-based. For our case study, we trained the models on data from 29 participants in a dataset acquired using an EEG cap. The FPGA solution emerged as the most efficient in terms of the power consumption–inference time product. Specifically, it reduces power consumption by up to 89% compared to the CPU and 71% compared to the GPU, and reduces the memory footprint for model inference by up to 98%, albeit at the cost of a 39% increase in inference time compared to the GPU. Both the embedded GPU and the FPGA outperform the CPU in terms of inference time.
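The abstract ranks platforms by the power consumption–inference time product, i.e., an energy-per-inference proxy (lower is better). A minimal sketch of how such a comparison is computed is shown below; the wattage and latency figures are made-up placeholders for illustration, not measurements from the article.

```python
# Sketch of the power x inference-time efficiency metric from the abstract.
# All platform numbers below are HYPOTHETICAL placeholders, not the paper's data.

def energy_proxy(power_w: float, inference_s: float) -> float:
    """Power x inference-time product, i.e., joules per inference."""
    return power_w * inference_s

def reduction_pct(baseline: float, candidate: float) -> float:
    """Percentage reduction of candidate relative to baseline."""
    return 100.0 * (baseline - candidate) / baseline

# Placeholder figures: (average power in watts, seconds per inference).
platforms = {
    "CPU": (65.0, 0.020),
    "embedded GPU": (10.0, 0.005),
    "FPGA": (3.0, 0.007),
}

products = {name: energy_proxy(p, t) for name, (p, t) in platforms.items()}
for name, prod in sorted(products.items(), key=lambda kv: kv[1]):
    print(f"{name}: {prod * 1000:.1f} mJ per inference")
```

With figures like these, the FPGA can win on the product metric even while its per-inference latency is higher than the GPU's, which mirrors the trade-off reported in the abstract.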
ISSN: 2079-9292
DOI: 10.3390/electronics13091646