A Brain-Robot Interaction System by Fusing Human and Machine Intelligence
This paper presents a new brain-robot interaction system that fuses human and machine intelligence to improve real-time control performance. The system consists of a hybrid P300 and steady-state visual evoked potential (SSVEP) mode conveying the human's intention, and machine intelligence combining a fuzzy-logic-based image processing algorithm with multi-sensor fusion technology. A subject selects an object of interest via P300, and the classification algorithm transfers the corresponding parameters to an improved fuzzy color extractor for object extraction. A central-vision tracking strategy then automatically guides the NAO humanoid robot to the destination the subject selected via brainwaves. During this process, the human supervises the system at a high level, while machine intelligence assists the robot in accomplishing tasks by analyzing images fed back from the camera, monitoring distance via out-of-gauge alarms from the sonars, and detecting collisions with the bumper sensors. The SSVEP mode takes over in situations where the machine intelligence cannot make a decision. Experimental results show that subjects can steer the robot to a destination of interest with fewer commands than with a brain-robot interface alone. The fusion of human and machine intelligence therefore greatly alleviates the brain load and enhances the robot's executive efficiency.
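The shared-control arbitration described in the abstract — machine intelligence drives whenever its sensors give it a decision, and the human's SSVEP command takes over only when they do not — can be sketched as a simple priority policy. This is a minimal illustration, not the authors' implementation; the `SensorState` fields, command strings, and `next_command` function are all hypothetical names chosen for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorState:
    """Hypothetical snapshot of the robot's feedback channels."""
    target_in_view: bool   # camera: color extractor found the selected object
    sonar_alarm: bool      # sonar: out-of-gauge distance alarm raised
    bumper_hit: bool       # bumper: collision detected

def next_command(state: SensorState, ssvep_command: Optional[str]) -> str:
    """One step of the shared-control policy: safety reflexes first,
    then autonomous vision tracking, then the human's SSVEP input."""
    if state.bumper_hit or state.sonar_alarm:
        return "stop"              # machine intelligence: safety reflex
    if state.target_in_view:
        return "track_target"      # machine intelligence: central-vision tracking
    # Machine intelligence cannot decide: defer to the human's SSVEP command.
    return ssvep_command if ssvep_command else "search"
```

For example, a sonar alarm overrides both tracking and any pending SSVEP command, while an empty camera view with no SSVEP input falls through to a default search behavior.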
Published in: | IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2019-03, Vol. 27 (3), p. 533-542 |
Main Authors: | Mao, Xiaoqian; Li, Wei; Lei, Chengwei; Jin, Jing; Duan, Feng; Chen, Sherry |
Format: | Article |
Language: | English |
DOI | 10.1109/TNSRE.2019.2897323 |
PMID | 30716043 |
Publisher | IEEE (United States) |
ISSN | 1534-4320 |
EISSN | 1558-0210 |
Subjects | Algorithms; Artificial Intelligence; Automation; Brain; Brain robot interaction; Brain-Computer Interfaces; Cameras; Electroencephalography; Event-related potentials; Event-Related Potentials, P300 - physiology; Evoked Potentials, Somatosensory - physiology; Fuzzy Logic; human intelligence; Humanoid; Humanoid robots; Humans; Hybrid systems; Image processing; Image Processing, Computer-Assisted; improved fuzzy color extractor (IFCE); Intelligence; Machine intelligence; multi-sensor fusion; Multisensor fusion; Robot control; Robot sensing systems; Robotics - methods; Robots; Task analysis; Vision, Ocular - physiology; Visual evoked potentials |