
Visual search and location probability learning from variable perspectives

Do moving observers code attended locations relative to the external world or relative to themselves? To address this question, we asked participants to conduct visual search on a tabletop. The search target was more likely to occur in some locations than in others. Participants walked to different sides of the table from trial to trial, changing their perspective. The high-probability locations were stable on the tabletop but variable relative to the viewer. When participants were informed of the high-probability locations, search was faster when the target was in those locations, demonstrating probability cuing. However, in the absence of explicit instructions and awareness, participants failed to acquire an attentional bias toward the high-probability locations, even when the search items were displayed over an invariant natural scene. Additional experiments showed that locomotion did not interfere with incidental learning; rather, the lack of a consistent perspective prevented participants from acquiring probability cuing incidentally. We conclude that spatial biases toward target-rich locations are directed by two mechanisms: incidental learning and goal-driven attention. Incidental learning codes attended locations in a viewer-centered reference frame and is not updated with viewer movement. Goal-driven attention can be deployed to prioritize an environment-rich region.


Bibliographic Details
Published in: Journal of vision (Charlottesville, Va.), 2013-05, Vol.13 (6), p.13-13
Main Authors: Jiang, Yuhong V, Swallow, Khena M, Capistrano, Christian G
Format: Article
Language:English
DOI: 10.1167/13.6.13
PMID: 23716120
ISSN: 1534-7362
EISSN: 1534-7362
Source: DOAJ Directory of Open Access Journals; PubMed Central
Subjects:
Adolescent
Adult
Analysis of Variance
Attention - physiology
Female
Humans
Learning - physiology
Male
Photic Stimulation
Recognition (Psychology) - physiology
Visual Perception - physiology
Young Adult