Standardization-refinement domain adaptation method for cross-subject EEG-based classification in imagined speech recognition

• We propose a D-UDA method for cross-subject EEG-based imagined speech recognition.
• A novel loss is introduced to refine decision boundaries from target subject data.
• The proposed method may build an effective classifier for a target subject.
• Our proposal outperforms other D-UDA methods on two imagined speech datasets.

Bibliographic Details
Published in: Pattern Recognition Letters, 2021-01, Vol. 141, p. 54-60
Main Authors: Jiménez-Guarneros, Magdiel; Gómez-Gil, Pilar
Format: Article
Language: English
Subjects: Adaptation; Classification; Classifiers; Deep learning; Disabilities; Divergence; Domains; EEG; Electroencephalography; Imagined speech; Machine learning; Speech; Speech recognition; Speeches; Standardization; Unsupervised domain adaptation; Voice recognition
DOI: 10.1016/j.patrec.2020.11.013
ISSN: 0167-8655
EISSN: 1872-7344
Description:
Recent advances in imagined speech recognition from EEG signals have shown their capability of enabling a new natural form of communication, which is poised to improve the lives of subjects with motor disabilities. However, differences among subjects may be an obstacle to the applicability of a previously trained classifier to new users, since a significant number of labeled samples must be acquired for each new user, making this process tedious and time-consuming. In this sense, unsupervised domain adaptation (UDA) methods, especially those based on deep learning (D-UDA), arise as a potential solution to address this issue by reducing the differences among feature distributions of subjects. It has been shown that the divergence in the marginal and conditional distributions must be reduced to encourage similar feature distributions. However, current D-UDA methods may become sensitive under adaptation scenarios where a low discriminative feature space among classes is given, reducing the accuracy performance of the classifier. To address this issue, we introduce a D-UDA method, named Standardization-Refinement Domain Adaptation (SRDA), which combines Adaptive Batch Normalization (AdaBN) with a novel loss function based on the variation of information (VOI), in order to build an adaptive classifier on EEG data corresponding to imagined speech. Our proposal, applied over two imagined speech datasets, resulted in SRDA outperforming standard classifiers for BCI and existing D-UDA methods, achieving accuracy performances of 61.02±08.14% and 62.99±04.78%, assessed using leave-one-out cross-validation.
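The abstract names two components of SRDA: a refinement loss "based on the variation of information (VOI)" and Adaptive Batch Normalization (AdaBN) as the standardization step. The record does not reproduce the paper's exact loss; for reference only, the standard variation of information between two random variables X and Y (for example, two distributions over predicted classes) is:

```latex
\mathrm{VI}(X;Y) \;=\; H(X) + H(Y) - 2\,I(X;Y) \;=\; H(X \mid Y) + H(Y \mid X)
```

AdaBN re-estimates batch-normalization statistics on unlabeled data from the target subject while keeping all learned weights fixed. The following is a minimal PyTorch-style sketch of that general idea, not the authors' implementation; the model, data loader, and layer types are assumptions for illustration.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_batchnorm_stats(model: nn.Module, target_loader) -> nn.Module:
    """Re-estimate BatchNorm running statistics on unlabeled target-subject
    EEG batches (the core idea behind AdaBN); learned weights stay frozen."""
    model.eval()  # keep dropout and other layers in inference mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()   # discard source-subject statistics
            m.momentum = None         # cumulative average over all target batches
            m.train()                 # BN updates running stats only in train mode
    for batch in target_loader:
        x = batch[0] if isinstance(batch, (tuple, list)) else batch
        model(x)                      # forward passes only: no labels, no gradient steps
    model.eval()
    return model
```

In a leave-one-out cross-subject evaluation such as the one reported above, a step like this would be run once per held-out subject before testing on that subject's recordings.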