
Mapping multidimensional content representations to neural and behavioral expressions of episodic memory

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), 2023-08, Vol. 277, Article 120222
Main Authors: Wang, Yingying; Lee, Hongmi; Kuhl, Brice A.
Format: Article
Language: English
Description
Summary:
•We used encoding models to map semantic content to fMRI activity patterns.
•Visual and parietal cortex yielded reconstructions of viewed and recalled content.
•Parietal reconstructions were invariant to content format (viewed, recalled).
•fMRI reconstructions of recalled content aligned with measures of verbal recall.
•Ventral temporal cortex reconstructions predicted idiosyncratic details in recall.

Human neuroimaging studies have shown that the contents of episodic memories are represented in distributed patterns of neural activity. However, these studies have mostly been limited to decoding simple, unidimensional properties of stimuli. Semantic encoding models, in contrast, offer a means for characterizing the rich, multidimensional information that comprises episodic memories. Here, we extensively sampled four human fMRI subjects to build semantic encoding models and then applied these models to reconstruct content from natural scene images as they were viewed and recalled from memory. First, we found that multidimensional semantic information was successfully reconstructed from activity patterns across visual and lateral parietal cortices, both when viewing scenes and when recalling them from memory. Second, whereas visual cortical reconstructions were much more accurate when images were viewed versus recalled from memory, lateral parietal reconstructions were comparably accurate across visual perception and memory. Third, by applying natural language processing methods to verbal recall data, we showed that fMRI-based reconstructions reliably matched subjects’ verbal descriptions of their memories. In fact, reconstructions from ventral temporal cortex more closely matched subjects’ own verbal recall than other subjects’ verbal recall of the same images. Fourth, encoding models reliably transferred across subjects: memories were successfully reconstructed using encoding models trained on data from entirely independent subjects. Together, these findings provide evidence for successful reconstructions of multidimensional and idiosyncratic memory representations and highlight the differential sensitivity of visual cortical and lateral parietal regions to information derived from the external visual environment versus internally-generated memories.
ISSN: 1053-8119, 1095-9572
DOI: 10.1016/j.neuroimage.2023.120222
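
The abstract describes voxelwise semantic encoding models fit to fMRI responses and then used to reconstruct viewed and recalled content. The snippet below is a minimal, illustrative sketch of that general approach, not the authors' pipeline: it fits a ridge-regression encoding model from semantic features to voxel patterns and then identifies which candidate item best explains an observed activity pattern. All data, array shapes, and variable names are assumptions made for illustration.

```python
# Minimal sketch (not the published code): a voxelwise semantic encoding model
# fit with ridge regression, then used to "reconstruct" content by scoring
# candidate semantic feature vectors against an observed activity pattern.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Assumed toy data: n_train viewed images with d-dimensional semantic features
# (e.g., embeddings of image descriptions) and v voxels in one ROI.
n_train, d, v = 200, 50, 300
X_train = rng.standard_normal((n_train, d))                      # semantic features
true_W = rng.standard_normal((d, v))                             # unknown "true" mapping
Y_train = X_train @ true_W + rng.standard_normal((n_train, v))   # simulated voxel patterns

# 1) Fit the encoding model: semantic features -> voxel activity.
model = Ridge(alpha=10.0)
model.fit(X_train, Y_train)

# 2) Reconstruct/identify a held-out item: compare the observed activity
#    pattern (viewed or recalled) with the pattern predicted for each
#    candidate feature vector, and keep the best-matching candidate.
candidates = rng.standard_normal((20, d))                        # candidate semantic vectors
target_idx = 7
observed = candidates[target_idx] @ true_W + rng.standard_normal(v)

predicted = model.predict(candidates)                            # (20, v) predicted patterns
scores = [np.corrcoef(observed, p)[0, 1] for p in predicted]
print("best-matching candidate:", int(np.argmax(scores)), "| true:", target_idx)
```

The study itself compares reconstructed semantic content against natural-language-processed representations of subjects' verbal recall; the correlation-based identification step above merely stands in for that scoring under the stated assumptions.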