Recruitment of magnitude representations to understand graded words


Bibliographic Details
Published in: Cognitive Psychology 2024-09, Vol. 153, p. 101673, Article 101673
Main Authors: Varma, Sashank, Sanford, Emily M., Marupudi, Vijay, Shaffer, Olivia, Brooke Lea, R.
Format: Article
Language: English
Description
Summary:
Highlights:
• Investigated understanding of graded words, e.g., calm, annoyed, angry, furious.
• Pairwise comparison RTs showed distance, size, and boundary effects.
• MDS solutions for pairwise similarity ratings also showed these effects.
• Suggests recruitment of magnitude representations to understand words.
• By contrast, machine learning models of word semantics did not show such effects.

Language understanding and mathematics understanding are two fundamental forms of human thinking. Prior research has largely focused on the question of how language shapes mathematical thinking. The current study considers the converse question. Specifically, it investigates whether the magnitude representations that are thought to anchor understanding of number are also recruited to understand the meanings of graded words. These are words that come in scales (e.g., Anger) whose members can be ordered by the degree to which they possess the defining property (e.g., calm, annoyed, angry, furious). Experiment 1 uses the comparison paradigm to find evidence that the distance, ratio, and boundary effects that are taken as evidence of the recruitment of magnitude representations extend from numbers to words. Experiment 2 uses a similarity rating paradigm and multi-dimensional scaling to find converging evidence for these effects in graded word understanding. Experiment 3 evaluates an alternative hypothesis – that these effects for graded words simply reflect the statistical structure of the linguistic environment – by using machine learning models of distributional word semantics: LSA, word2vec, GloVe, counterfitted word vectors, BERT, RoBERTa, and GPT-2. These models fail to show the full pattern of effects observed in humans in Experiment 2, suggesting that more is needed than mere statistics.
This research paves the way for further investigations of the role of magnitude representations in sentence and text comprehension, and of the question of whether language understanding and number understanding draw on shared or independent magnitude representations. It also informs the role of machine learning models in cognitive psychology research.
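The multi-dimensional scaling analysis described in the summary can be sketched with scikit-learn. The minimal example below is not the paper's data: the four-word scale and the pairwise dissimilarity values are hypothetical, chosen only to illustrate how an MDS solution can recover a one-dimensional magnitude-like ordering from pairwise similarity ratings.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical graded words on an Anger scale, ordered by intensity.
words = ["calm", "annoyed", "angry", "furious"]

# Hypothetical symmetric dissimilarity matrix (0 = identical; larger values
# = rated less similar). Real input would come from participants' ratings.
dissim = np.array([
    [0.0, 1.0, 2.0, 3.0],
    [1.0, 0.0, 1.0, 2.0],
    [2.0, 1.0, 0.0, 1.0],
    [3.0, 2.0, 1.0, 0.0],
])

# Metric MDS with a precomputed dissimilarity matrix; a 1-D solution tests
# whether the words fall along a single magnitude-like axis.
mds = MDS(n_components=1, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

# Sort by the recovered coordinate; the scale's ordering should be preserved
# (MDS solutions are identified only up to reflection, so it may be reversed).
order = np.argsort(coords[:, 0])
print([words[i] for i in order])
```

Distance and size effects can then be probed by checking how inter-point distances in the recovered configuration relate to position on the scale.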
ISSN: 0010-0285
1095-5623
DOI: 10.1016/j.cogpsych.2024.101673