
What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective

Selective attention plays an essential role in information acquisition and utilization from the environment. In the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve...

Full description

Bibliographic Details
Published in: Frontiers in integrative neuroscience, 2020-02, Vol. 14, p. 10
Main Authors: Fu, Di; Weber, Cornelius; Yang, Guochun; Kerzel, Matthias; Nan, Weizhi; Barros, Pablo; Wu, Haiyan; Liu, Xun; Wermter, Stefan
Format: Article
Language: English
Citations: Items that this one cites
Items that cite this one
Online Access: Get full text
Description: Selective attention plays an essential role in information acquisition and utilization from the environment. In the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve real-world challenges in both human experiments and computational modeling. Although an increasing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still necessary to yield the same benefit for intelligent computational agents. This article reviews studies of selective attention in unimodal visual and auditory and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways to simulate analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights about how to use psychological findings and theories in artificial intelligence from different perspectives.
DOI: 10.3389/fnint.2020.00010
ISSN: 1662-5145
EISSN: 1662-5145
PMID: 32174816
Published: 2020-02-27, Frontiers Research Foundation, Switzerland
Rights: Copyright © 2020 Fu, Weber, Yang, Kerzel, Nan, Barros, Wu, Liu and Wermter. This work is licensed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).
Source: Open Access: PubMed Central; Publicly Available Content Database
Subjects: Artificial intelligence; Attention; auditory attention; Bias; Cognitive ability; computational modeling; Computational neuroscience; Computer science; Control theory; crossmodal learning; deep learning; Information processing; Interdisciplinary aspects; Nervous system; Neuroscience; selective attention; Sensory integration; visual attention; Visual perception