
The neural representation of abstract words may arise through grounding word meaning in language itself

Bibliographic Details
Published in: Human brain mapping 2021-10, Vol. 42 (15), p. 4973-4984
Main Authors: Hultén, Annika, Vliet, Marijn, Kivisaari, Sasa, Lammi, Lotta, Lindh‐Knuutila, Tiina, Faisal, Ali, Salmelin, Riitta
Format: Article
Language: English
Summary: In order to describe how humans represent meaning in the brain, one must be able to account for not just concrete words but, critically, also abstract words, which lack a physical referent. Hebbian formalism and optimization are basic principles of brain function, and they provide an appealing approach for modeling word meanings based on word co‐occurrences. We provide a proof of concept that a statistical model of the semantic space can account for neural representations of both concrete and abstract words, using magnetoencephalography (MEG). Here, we built a statistical model using word embeddings extracted from a text corpus. This statistical model was used to train a machine learning algorithm to successfully decode the MEG signals evoked by written words. In the model, word abstractness emerged from the statistical regularities of the language environment. Representational similarity analysis further showed that this salient property of the model co‐varies, at 280–420 ms after visual word presentation, with activity in regions that have been previously linked with processing of abstract words, namely the left‐hemisphere frontal, anterior temporal and superior parietal cortex. In light of these results, we propose that the neural encoding of word meanings can arise through statistical regularities, that is, through grounding in language itself. A statistical model using word embeddings extracted from a text corpus was used to train a machine learning algorithm to successfully decode the MEG signals evoked by written words. In the model, word abstractness emerged from the statistical regularities of the language environment. The study provides proof of concept that a statistical model of the semantic space can account for neural representations of both concrete and abstract words.
ISSN: 1065-9471
1097-0193
DOI: 10.1002/hbm.25593
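
The summary above describes two analysis ideas: zero-shot decoding of MEG responses via corpus-derived word embeddings, and representational similarity analysis (RSA) relating the embedding space to cortical activity in the 280–420 ms window. The sketch below is not the authors' code; it is a minimal Python illustration of both ideas using simulated data, with hypothetical array shapes and illustrative variable names.

# Minimal sketch, assuming simulated data in place of real MEG recordings
# and corpus-derived embeddings. All shapes and names are illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

# Hypothetical data: 60 written words, MEG responses summarised over sensors
# in the 280-420 ms window, and 300-dimensional word embeddings.
n_words, n_meg_features, n_dims = 60, 204, 300
meg = rng.standard_normal((n_words, n_meg_features))    # words x MEG features
embeddings = rng.standard_normal((n_words, n_dims))     # words x embedding dims

# (1) Zero-shot decoding: learn a MEG -> embedding mapping on held-in words,
# then identify the left-out word by nearest neighbour in embedding space.
correct = 0
for train_idx, test_idx in LeaveOneOut().split(meg):
    model = Ridge(alpha=1.0).fit(meg[train_idx], embeddings[train_idx])
    predicted = model.predict(meg[test_idx])            # predicted embedding
    dists = np.linalg.norm(embeddings - predicted, axis=1)
    correct += int(np.argmin(dists) == test_idx[0])
print(f"zero-shot identification accuracy: {correct / n_words:.2f}")

# (2) RSA: correlate pairwise word distances in the embedding model with
# pairwise distances between the corresponding MEG response patterns.
model_rdm = pdist(embeddings, metric="correlation")
neural_rdm = pdist(meg, metric="correlation")
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RSA: Spearman rho = {rho:.3f} (p = {p:.3g})")

With random data both scores sit at chance level; the point is only to show the structure of the two analyses, not to reproduce the reported results.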