
Influence of Rule- and Reward-based Strategies on Inferences of Serial Order by Monkeys

Knowledge of transitive relationships between items can contribute to learning the order of a set of stimuli from pairwise comparisons. However, the cognitive mechanisms of transitive inferences based on rank order remain unclear, as do the relative contributions of reward associations and rule-based inference. To explore these issues, we created a conflict between rule- and reward-based learning during a serial ordering task. Rhesus macaques learned two lists, each containing five stimuli that were trained exclusively with adjacent pairs. Selection of the higher-ranked item resulted in rewards. “Small reward” lists yielded two drops of fluid reward, whereas “large reward” lists yielded five drops. Following training of adjacent pairs, monkeys were tested on novel pairs. One item was selected from each list, such that a ranking rule could conflict with preferences for large rewards. Differences between the corresponding reward magnitudes had a strong influence on accuracy, but we also observed a symbolic distance effect, which provided evidence of a rule-based influence on decisions. Response time (RT) comparisons suggested a conflict between rule- and reward-based processes. We conclude that performance reflects the contributions of two strategies and that a model-based strategy is employed even in the face of a strong countervailing reward incentive.

Bibliographic Details
Published in: Journal of cognitive neuroscience, 2022-03, Vol. 34 (4), p. 592-604
Main Authors: Ferhat, Allain-Thibeault; Jensen, Greg; Terrace, Herbert S.; Ferrera, Vincent P.
Format: Article
Language: English
Subjects: Animals; Cognitive ability; Humans; Knowledge; Learning; Macaca mulatta - psychology; Motivation; Reinforcement; Review; Reward
DOI: 10.1162/jocn_a_01823
ISSN: 0898-929X
EISSN: 1530-8898
PMID: 35061028
Publisher: MIT Press, Cambridge, Massachusetts