
EEG-based detection of modality-specific visual and auditory sensory processing

Bibliographic Details
Published in: Journal of Neural Engineering, 2023-02, Vol. 20 (1), p. 16049
Main Authors: Massaeli, Faghihe; Bagheri, Mohammad; Power, Sarah D.
Format: Article
Language:English
Subjects: Attention; Auditory Perception; Electroencephalography; Humans; Passive brain-computer interface; Visual and auditory processing; Visual Perception; Workload
ISSN: 1741-2560
EISSN: 1741-2552
DOI: 10.1088/1741-2552/acb9be
PMID: 36749989
Publisher: IOP Publishing (England)
Source: Institute of Physics

Abstract:
A passive brain-computer interface (pBCI) is a system that enhances a human-machine interaction by monitoring the mental state of the user and, based on this implicit information, making appropriate modifications to the interaction. Key to the development of such a system is the ability to reliably detect the mental state of interest via neural signals. Many different mental states have been investigated, including fatigue, attention and various emotions; however, one of the most commonly studied states is mental workload, i.e. the amount of attentional resources required to perform a task. The emphasis of mental workload studies to date has been almost exclusively on detecting and predicting the 'level' of cognitive resources required (e.g. high vs. low), but we argue that having information regarding the specific 'type' of resources (e.g. visual or auditory) would allow the pBCI to apply more suitable adaptation techniques than would be possible knowing just the overall workload level. Fifteen participants performed carefully designed visual and auditory tasks while electroencephalography (EEG) data was recorded. The tasks were designed to be as similar as possible to one another except for the type of attentional resources required, and were performed at two different levels of demand. Using traditional machine learning algorithms, we investigated, firstly, whether EEG can be used to distinguish between auditory and visual processing tasks and, secondly, what effect the level of sensory processing demand has on that ability. The results show that at the high level of demand, the auditory vs. visual processing tasks could be distinguished with an accuracy of 77.1% on average. However, in the low demand condition of this experiment, the tasks were not classified with an accuracy exceeding chance. These results support the feasibility of developing a pBCI for detecting not only the level, but also the type, of attentional resources being required of the user at a given time. Further research is required to determine whether there is a threshold of demand below which the type of sensory processing cannot be detected, but even if that is the case, these results are still promising, since it is the high end of demand that is of most concern in safety-critical scenarios. Such a BCI could help improve safety in high-risk occupations by initiating the most effective and efficient possible adaptation strategies when high workload conditions are detected.
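
The record describes the method only as "traditional machine learning algorithms" applied to EEG. Purely as an illustration of that class of approach, and not the authors' actual pipeline, the sketch below extracts log band-power features from epoched EEG and cross-validates a linear SVM on an auditory-vs-visual label. The sampling rate, frequency bands, data shapes, and classifier choice are all assumptions made for the example.

```python
# Illustrative sketch only: the paper's exact pipeline is not given in this
# record, so all shapes, bands, and model choices here are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # canonical bands

def band_power_features(epochs):
    """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))  # mean power per channel/band
    # Log-transform band powers and flatten to one feature vector per epoch.
    return np.log(np.stack(feats, axis=-1)).reshape(len(epochs), -1)

# Hypothetical data: 200 two-second epochs, 32 channels;
# labels 0 = auditory task, 1 = visual task.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 32, 2 * FS))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, band_power_features(X_raw), y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")  # ~0.5 on random data, as expected
```

On real recordings the random array would be replaced by preprocessed, artifact-rejected epochs. With two balanced classes the chance baseline is about 50%, which is the level the low-demand condition reportedly failed to exceed.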