Looking is not enough: Multimodal attention supports the real‐time learning of new words
Published in: Developmental Science, 2023-03, Vol. 26 (2), e13290
Format: Article
Language: English
Summary: Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real‐time behaviors required for learning new words during free‐flowing toy play, we measured infants' visual attention and manual actions on to‐be‐learned toys. Parents and 12‐to‐26‐month‐old infants wore wireless head‐mounted eye trackers, allowing them to move freely around a home‐like lab environment. After the play session, infants were tested on their knowledge of object‐label mappings. We found that how often parents named objects during play did not predict learning; instead, it was infants' attention during and around a labeling utterance that predicted whether an object‐label mapping was learned. More specifically, we found that infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention – when infants' hands and eyes were attending to the same object – predicted word learning. Our results implicate a causal pathway through which infants' bodily actions play a critical role in early word learning.
To study the real‐time behaviors required for learning new words, we used wireless head‐mounted eye trackers to measure infants' visual attention and manual actions during parent‐infant interactions. We found that how often parents named objects did not predict learning. Instead, infants' multimodal attention – when infants' hands and eyes were attending to the same object – during and around a labeling utterance predicted whether an object‐label mapping was learned, implicating a causal pathway through which infants' bodily actions play a critical role in early word learning.
ISSN: 1363-755X, 1467-7687
DOI: 10.1111/desc.13290