Limited attention and discourse structure
Published in: Computational Linguistics (Association for Computational Linguistics), 1996-06, Vol. 22 (2), p. 255-264
Format: Article
Language: English
Summary: The stack model of hierarchical recency in discourse proposed by Barbara J. Grosz & Candace L. Sidner (see abstract 8700973) is argued not to account for two types of data: (1) the infelicity of utterances dependent on access to discourse entities that become hierarchically recent after a relatively long interruptive focus space is removed from the stack, & (2) the frequent occurrence of informationally redundant utterances in natural discourse following long interruptions. A proposed cache model of attentional state, incorporating current evidence of limited human attention, formalizes working memory as a limited-capacity cache & long-term memory as main memory. Discourse processes operate on the cache & therefore must retrieve from main memory any necessary information not already present in the cache; when an intended process is completed, information relevant to it is not "popped" as in the stack model but remains in the cache until displaced by the insertion of new or retrieved information. 2 Figures, 26 References. J. Hitchcock
ISSN: 0891-2017; 1530-9312
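The summary above describes the cache model in data-structure terms: a limited-capacity cache over long-term main memory, where completed segments are not popped but persist until displaced. The following Python sketch illustrates that contrast under stated assumptions; the class name `CacheAttentionalState`, the capacity value, and the least-recently-used displacement policy are illustrative choices, not details taken from the paper.

```python
from collections import OrderedDict

class CacheAttentionalState:
    """Working memory as a limited-capacity cache over long-term main memory.

    Unlike the stack model, finishing a discourse segment does not pop its
    entities; they stay in the cache until displaced by new or retrieved items.
    (Illustrative sketch only; the displacement policy here is assumed LRU.)
    """

    def __init__(self, capacity=7):
        self.capacity = capacity      # limited attention: small, fixed size
        self.cache = OrderedDict()    # insertion order approximates recency of use
        self.main_memory = {}         # long-term memory of discourse entities

    def store_long_term(self, entity, info):
        self.main_memory[entity] = info

    def access(self, entity):
        """Return info for an entity, retrieving from main memory on a miss."""
        if entity in self.cache:
            self.cache.move_to_end(entity)   # refresh recency; no retrieval needed
            return self.cache[entity]
        info = self.main_memory.get(entity)  # cache miss: retrieve from main memory
        self._insert(entity, info)
        return info

    def _insert(self, entity, info):
        self.cache[entity] = info
        self.cache.move_to_end(entity)
        while len(self.cache) > self.capacity:
            # Displacement: the least recently used entity leaves the cache,
            # regardless of which discourse segment introduced it.
            self.cache.popitem(last=False)


# Usage: after a long interruption, entities from the interrupted segment may
# have been displaced, so resuming requires retrieval from main memory (or an
# informationally redundant utterance that reinstates them in the cache).
state = CacheAttentionalState(capacity=3)
for e in ["garden", "rosebush", "trowel"]:       # main discourse segment
    state.store_long_term(e, f"info about {e}")
    state.access(e)
for e in ["phone", "caller", "message"]:         # long interruption fills the cache
    state.store_long_term(e, f"info about {e}")
    state.access(e)
print("rosebush" in state.cache)                 # False: displaced, must be retrieved
```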