Computational reconstruction of mental representations using human behavior
Revealing how the mind represents information is a longstanding goal of cognitive science. However, there is currently no framework for reconstructing the broad range of mental representations that humans possess. Here, we ask participants to indicate what they perceive in images made of random visual features in a deep neural network. We then infer associations between the semantic features of their responses and the visual features of the images. This allows us to reconstruct the mental representations of multiple visual concepts, both those supplied by participants and other concepts extrapolated from the same semantic space. We validate these reconstructions in separate participants and further generalize our approach to predict behavior for new stimuli and in a new task. Finally, we reconstruct the mental representations of individual observers and of a neural network. This framework enables a large-scale investigation of conceptual representations.
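The core step described in the abstract — inferring associations between the visual features of random stimuli and the semantic features of participants' responses, then inverting that mapping to reconstruct a concept — can be illustrated in miniature. The sketch below is not the authors' implementation: it assumes ridge regression as the association model, uses synthetic data in place of real trials, and all array sizes and variable names are hypothetical.

```python
# Illustrative sketch only -- NOT the code from the paper. It mimics the
# abstract's core idea under simplifying assumptions: each trial pairs the
# visual features of a random stimulus with a semantic embedding of the
# participant's response, and ridge regression recovers a visual-to-semantic
# mapping that can then be inverted to "reconstruct" a concept.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_visual, n_semantic = 2000, 50, 10   # toy sizes, not from the paper

# Synthetic stand-ins for the real data:
X = rng.normal(size=(n_trials, n_visual))         # visual features per stimulus
B_true = rng.normal(size=(n_visual, n_semantic))  # hidden visual->semantic associations
Y = X @ B_true + 0.5 * rng.normal(size=(n_trials, n_semantic))  # response embeddings

# Infer associations between visual and semantic features (ridge regression).
lam = 1.0
B_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_visual), X.T @ Y)

# "Reconstruct" a concept: project its semantic embedding back into visual
# feature space via the pseudoinverse of the learned mapping.
concept_embedding = rng.normal(size=n_semantic)   # e.g., embedding of a word
visual_reconstruction = np.linalg.pinv(B_hat.T) @ concept_embedding
print(visual_reconstruction.shape)  # (n_visual,): one value per visual feature
```

In the actual study the visual features come from a deep neural network and the semantic features from participants' verbal responses; the toy arrays here only stand in for those.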
Published in: | Nature communications 2024-05, Vol.15 (1), p.4183-4183, Article 4183 |
---|---|
Main Authors: | Caplette, Laurent; Turk-Browne, Nicholas B. |
Format: | Article |
Language: | English |
Subjects: | Cognitive science; Human behavior; Neural networks; Representations; Semantics; Visual perception |
Online Access: | https://doi.org/10.1038/s41467-024-48114-6 |
container_end_page | 4183 |
container_issue | 1 |
container_start_page | 4183 |
container_title | Nature communications |
container_volume | 15 |
creator | Caplette, Laurent; Turk-Browne, Nicholas B.
description | Revealing how the mind represents information is a longstanding goal of cognitive science. However, there is currently no framework for reconstructing the broad range of mental representations that humans possess. Here, we ask participants to indicate what they perceive in images made of random visual features in a deep neural network. We then infer associations between the semantic features of their responses and the visual features of the images. This allows us to reconstruct the mental representations of multiple visual concepts, both those supplied by participants and other concepts extrapolated from the same semantic space. We validate these reconstructions in separate participants and further generalize our approach to predict behavior for new stimuli and in a new task. Finally, we reconstruct the mental representations of individual observers and of a neural network. This framework enables a large-scale investigation of conceptual representations.
Revealing how the human mind represents information is a longstanding goal of cognitive science. Here, the authors develop a method to reconstruct the mental representations of multiple visual concepts using behavioral judgments.
doi_str_mv | 10.1038/s41467-024-48114-6 |
format | article |
publisher | Nature Publishing Group UK, London
publishDate | 2024-05-17
pmid | 38760341
rights | The Author(s) 2024; open access under Creative Commons Attribution 4.0 (http://creativecommons.org/licenses/by/4.0/)
orcid | 0000-0001-7519-3001; 0000-0002-7997-5729
fulltext | fulltext |
identifier | ISSN: 2041-1723 |
ispartof | Nature communications, 2024-05, Vol.15 (1), p.4183-4183, Article 4183 |
issn | 2041-1723
language | eng |
source | Publicly Available Content Database; Nature; PubMed Central; Springer Nature - nature.com Journals - Fully Open Access |
subjects | 631/378/2613/2616; 631/477/2811; Adult; Artificial neural networks; Behavior; Cognition - physiology; Cognitive science; Female; Human behavior; Humanities and Social Sciences; Humans; Male; multidisciplinary; Neural networks; Neural Networks, Computer; Photic Stimulation - methods; Representations; Science; Science (multidisciplinary); Semantics; Visual Perception - physiology; Young Adult
title | Computational reconstruction of mental representations using human behavior |
url | https://doi.org/10.1038/s41467-024-48114-6