
Tasks determine what is learned in visual statistical learning

Bibliographic Details
Published in: Psychonomic Bulletin & Review, 2018-10, Vol. 25 (5), p. 1847-1854
Main Authors: Vickery, Timothy J., Park, Su Hyoun, Gupta, Jayesh, Berryhill, Marian E.
Format: Article
Language: English
Description: Visual statistical learning (VSL), the unsupervised learning of statistical contingencies across time and space, may play a key role in efficient and predictive encoding of the perceptual world. How VSL capabilities vary as a function of ongoing task demands is still poorly understood. VSL is modulated by selective attention and faces interference from some secondary tasks, but there is little evidence that the types of contingencies learned in VSL are sensitive to task demands. We found a powerful effect of task on what is learned in VSL. Participants first completed a visual familiarization task requiring judgments of face gender (female/male) or scene location (interior/exterior). Statistical regularities were embedded between stimulus pairs. During a surprise recognition phase, participants showed less recognition for pairs that had required a change in response key (e.g., female followed by male) or task (e.g., female followed by indoor) during familiarization. When familiarization required detection of “flicker” or “jiggle” events unrelated to image content, there was weaker, but uniform, VSL across pair types. These results suggest that simple task manipulations play a strong role in modulating the distribution of learning over different pair combinations. Such variations may arise from task and response conflict or because the manner in which images are processed is altered.
DOI: 10.3758/s13423-017-1405-6
ISSN: 1069-9384
EISSN: 1531-5320
PMID: 29159798
Publisher: Springer US, New York
Source: Springer Nature
Subjects:
Attention - physiology
Behavioral Science and Psychology
Brief Report
Cognitive Psychology
Experiments
Humans
Judgment
Learning - physiology
Neurosciences
Pattern Recognition, Visual - physiology
Psychology
Spatial Learning - physiology