EEG Emotion Recognition via a Lightweight 1DCNN-BiLSTM Model in Resource-Limited Environments
In the application of wearable medical monitoring devices, EEG emotion recognition tasks need to be implemented in resource-constrained environments. The proposed lightweight 1DCNN-BiLSTM network therefore aims to match the emotion recognition accuracy of existing models while significantly reducing computational cost and memory usage on resource-constrained devices.
Published in: | IEEE Sensors Journal, 2025, p. 1-1 |
---|---|
Main Authors: | Liu, Haipeng; Zhang, Shaolin; Shi, Jiangyi; Liu, Hongjin; Zhang, Yuming; Wu, Wenhao; Li, Bin |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Get full text |
cited_by | |
---|---|
cites | |
container_end_page | 1 |
container_issue | |
container_start_page | 1 |
container_title | IEEE sensors journal |
container_volume | |
creator | Liu, Haipeng; Zhang, Shaolin; Shi, Jiangyi; Liu, Hongjin; Zhang, Yuming; Wu, Wenhao; Li, Bin |
description | In the application of wearable medical monitoring devices, EEG emotion recognition tasks need to be implemented in resource-constrained environments. The proposed lightweight 1DCNN-BiLSTM network therefore aims to match the emotion recognition accuracy of existing models while significantly reducing computational cost and memory usage on resource-constrained devices. First, a low-cost preprocessing method removes the interference of baseline signals from the raw EEG signal. Second, a shallow hybrid 1DCNN-BiLSTM network is proposed to extract spatial features across channels and forward-backward temporal features from EEG signals. Finally, the trained model is quantized to reduce memory consumption, replacing floating-point operations with fixed-point operations. Experiments on the DEAP and DREAMER datasets achieve more than 90% recognition accuracy. The quantized network occupies 17.4 KB of memory, and a single classification requires 1.7 MFLOPs. The model is ultimately deployed on an embedded processor, attaining an inference time of 352.51 ms and thereby enabling emotion recognition in resource-constrained environments. |
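The two lightweight steps the abstract describes, baseline-signal removal and fixed-point quantization, can be sketched in pure Python. This is a minimal illustration under assumptions, not the authors' implementation: the function names, the per-channel mean-subtraction scheme, and the symmetric 8-bit quantization are all choices made here for clarity.

```python
def remove_baseline(trial, baseline):
    """Subtract the per-channel mean of a pre-stimulus baseline segment
    from each channel of the trial (an assumed low-cost preprocessing)."""
    cleaned = []
    for ch_sig, ch_base in zip(trial, baseline):
        mean_b = sum(ch_base) / len(ch_base)
        cleaned.append([x - mean_b for x in ch_sig])
    return cleaned

def quantize_weights(weights, bits=8):
    """Symmetric fixed-point quantization: map floats to signed integers
    with a shared scale, so inference can use integer multiply-accumulate
    instead of floating-point operations."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [v * scale for v in q]
```

The point of storing only small integers plus one scale per tensor is that memory drops roughly 4x versus 32-bit floats, which is consistent with the 17.4 KB footprint the abstract reports for the whole quantized network.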
doi_str_mv | 10.1109/JSEN.2024.3514094 |
format | article |
orcidid | 0000-0002-8465-9528; 0009-0001-6641-5204; 0000-0002-8587-0747 |
fulltext | fulltext |
identifier | ISSN: 1530-437X |
ispartof | IEEE sensors journal, 2025, p.1-1 |
issn | 1530-437X 1558-1748 |
language | eng |
recordid | cdi_ieee_primary_10812798 |
source | IEEE Xplore (Online service) |
subjects | Accuracy; Bidirectional long short term memory; Brain modeling; Computational modeling; Convolution; convolutional neural network; electroencephalogram (EEG); Electroencephalography; Emotion recognition; Feature extraction; lightweight; Logic gates; Sensors |
title | EEG Emotion Recognition via a Lightweight 1DCNN-BiLSTM Model in Resource-Limited Environments |