Reviewing the quality of discourse information measures in aphasia
Published in: International Journal of Language & Communication Disorders, 2017-11, Vol. 52 (6), p. 689-732
Format: Article
Language: English
Summary:

Background
Discourse is fundamental to everyday communication, and is an increasing focus of clinical assessment, intervention and research. Aphasia can affect the information a speaker communicates in discourse. Little is known about the psychometrics of the tools for measuring information in discourse, which means it is unclear whether these measures are of sufficient quality to be used as clinical outcome measures or diagnostic tools.
Aims
To profile the measures used to describe information in aphasic discourse, and to assess the quality of these measures against standard psychometric criteria.
Methods & Procedures
A scoping review method was employed. Studies were identified using a systematic search of Scopus, Medline and Embase databases. Standard psychometric criteria were used to evaluate the measures’ psychometric properties.
Main contribution
The current review summarizes and collates the information measures used to describe aphasic discourse, and evaluates their quality in terms of the psychometric properties of acceptability, reliability and validity. Seventy-six studies described 58 discourse information measures, with a mean of 2.28 measures used per study (SD = 1.29, range = 1-7). Measures were classified as 'functional' measures (n = 33), which focused on discourse macrostructure, and 'functional and structural' measures (n = 25), which combined micro-linguistic and macro-structural approaches to discourse. There were no reports of the acceptability of data generated by the measures (distribution of scores, missing data). Test-retest reliability was reported for just 8/58 measures, with 3/8 exceeding 0.80. Intra-rater reliability was reported for 9/58 measures, and in all cases percentage agreement was reported rather than a reliability statistic. Percentage agreement was also frequently reported for inter-rater reliability, with only 4/76 studies reporting reliability statistics for 12/58 measures; reliability was generally high (> 0.80 for 11/12 measures). The majority of measures related clearly to the discourse production model, indicating content validity. A total of 36/58 measures were used to make 41 comparisons between participants with aphasia (PWA) and neurologically healthy participants (NHP), with 31/41 comparisons showing a difference between the groups. Four comparisons were made between discourse genres, with two measures showing a difference between genres and two measures showing no difference.
Conclusions
There is currently insufficient information av
ISSN: 1368-2822; 1460-6984
DOI: 10.1111/1460-6984.12318