New evidence for chunk-based models in word segmentation
Published in: Acta psychologica, 2014-06, Vol. 149, p. 1-8
Main Authors: , , ,
Format: Article
Language: English
Summary: There is ample evidence that infants are able to exploit statistical cues to discover the words of their language. However, how they do so remains the object of enduring debate. The prevalent position is that words are extracted through the prior computation of statistics, in particular the transitional probabilities between syllables. As an alternative, chunk-based models posit that sensitivity to statistics results from other processes, whereby many potential chunks are considered as candidate words and then selected as a function of their relevance. These two classes of models have proven difficult to dissociate. We propose here a procedure that leads to contrasting predictions regarding the influence of a first language, L1, on the segmentation of a second language, L2. Simulations run with PARSER (Perruchet & Vinter, 1998), a chunk-based model, predict that when the words of L1 become word-external transitions of L2, learning of L2 should be impaired, down to below-chance level, at least until extensive exposure to L2 reverses the effect. In the same condition, a transitional-probability-based model predicts above-chance performance whatever the duration of exposure to L2. PARSER's predictions were confirmed by experimental data: performance on a two-alternative forced-choice test between words and part-words of L2 was significantly below chance, even though the part-words were less cohesive in terms of transitional probabilities than the words.
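The statistics-first account mentioned in the summary computes forward transitional probabilities between adjacent syllables, TP(x → y) = count(xy) / count(x), and locates word boundaries at TP troughs. A minimal sketch of that computation (the toy syllable stream, word inventory, and function name are illustrative, not taken from the article):

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP(x -> y) = count(x immediately followed by y) / count(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unit_counts = Counter(syllables[:-1])  # every syllable that has a successor
    return {(x, y): n / unit_counts[x] for (x, y), n in pair_counts.items()}

# Toy stream: three trisyllabic "words" concatenated in random order,
# as in classic artificial-language segmentation studies.
rng = random.Random(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ke", "mi"]]
stream = [s for w in rng.choices(words, k=100) for s in w]

tps = transitional_probabilities(stream)
# Within-word TPs are 1.0; TPs across word boundaries hover around 1/3,
# so the low-TP dips mark the word boundaries.
```

On this view the learner segments by thresholding the TP profile; the article's manipulation is designed so that such a model still ranks words above part-words in L2.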
Highlights:
- How does a first artificial language, L1, influence the segmentation of a second one, L2?
- L1 was designed to contrast two models of word segmentation.
- A transitional-probability model predicts above-chance performance on L2.
- By contrast, a chunk-based model (PARSER) predicts below-chance performance on L2.
- PARSER's predictions were confirmed by experimental data collected in adults.
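The chunk-based alternative contrasted above works differently: many candidate chunks are entertained, reinforced when re-perceived, and forgotten otherwise, with strong chunks becoming perceptual units in their own right. A drastically simplified learner in that spirit (the loop structure and all parameter values are illustrative assumptions, not the published PARSER model):

```python
import random

def chunk_learner(stream, steps=2000, threshold=1.0, gain=1.0, decay=0.05, seed=0):
    """Toy chunk-based learner in the spirit of PARSER (Perruchet & Vinter,
    1998). Parameters and update rules here are illustrative only."""
    rng = random.Random(seed)
    lexicon = {}  # candidate chunk (tuple of syllables) -> weight
    pos = 0
    for _ in range(steps):
        percept = []
        # Attend to 1-3 perceptual units; a unit is the longest known strong
        # chunk matching the stream at this position, else a single syllable.
        for _ in range(rng.randint(1, 3)):
            if pos >= len(stream):
                pos = 0  # loop the stream, mimicking continuous exposure
            matches = [c for c, w in lexicon.items()
                       if w >= threshold and tuple(stream[pos:pos + len(c)]) == c]
            unit = max(matches, key=len) if matches else (stream[pos],)
            percept.extend(unit)
            pos += len(unit)
        chunk = tuple(percept)
        # Reinforce the perceived chunk; every other candidate decays
        # (forgetting) and is dropped once its weight reaches zero.
        lexicon[chunk] = lexicon.get(chunk, 0.0) + gain
        for c in [k for k in lexicon if k != chunk]:
            lexicon[c] -= decay
            if lexicon[c] <= 0:
                del lexicon[c]
    return lexicon

# Toy stream: three trisyllabic words in random order. Recurring chunks (the
# words) should come to dominate the lexicon as exposure accumulates.
rng = random.Random(0)
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ke", "mi")]
stream = [s for w in rng.choices([list(w) for w in words], k=200) for s in w]
lexicon = chunk_learner(stream)
```

Because such a learner carries its L1 lexicon into L2 as ready-made perceptual units, its L2 segmentation can be driven below chance, which is the signature behavior the article exploits.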
ISSN: 0001-6918, 1873-6297
DOI: 10.1016/j.actpsy.2014.01.015