Validating the revised Test of Spoken English against a criterion of communicative success
Published in: Language Testing, 1999-10, Vol. 16 (4), pp. 399-425
Format: Article
Language: English
Summary: A communicative competence orientation was taken to study the validity of test-score inferences derived from the revised Test of Spoken English (TSE). To implement the approach, a sample of undergraduate students, primarily native speakers of English, provided a variety of reactions to, and judgements of, the test responses of a sample of TSE examinees. The TSE scores of these examinees, previously determined by official TSE raters, spanned the full range of TSE score levels. Undergraduate students were selected as ‘evaluators’ because they, more than most other groups, are likely to interact with TSE examinees, many of whom become teaching assistants.

Student evaluations were captured by devising and administering a secondary listening test (SLT) to assess students’ understanding of TSE examinees’ speech, as represented by their taped responses to tasks on the TSE. The objective was to determine the degree to which official TSE scores are predictive of listeners’ ability to understand the messages conveyed by TSE examinees.

Analyses revealed a strong association between TSE score levels and the judgements, reactions and understanding of listeners. This finding applied to all TSE tasks and to nearly all of the several different kinds of evaluations made by listeners. Along with other information, the evidence gathered here should help the TSE program meet professional standards for test validation. The procedures may also prove useful in future test-development efforts as a way of determining the difficulty of speaking tasks (and possibly writing tasks).
ISSN: 0265-5322, 1477-0946
DOI: 10.1177/026553229901600401