Using Generalizability Theory to Estimate the Reliability of Writing Scores Derived from Holistic and Analytical Scoring Methods

Bibliographic Details
Published in: Educational and Psychological Measurement, 1999-06, Vol. 59 (3), p. 492-506
Main Authors: Swartz, Carl W., Hooper, Stephen R., Montgomery, James W., Wakely, Melissa B., de Kruif, Renee E. L., Reed, Martha, Brown, Timothy T., Levine, Melvin D., White, Kinnard P.
Format: Article
Language:English
Description
Summary: Issues surrounding the psychometric properties of writing assessments have received ongoing attention. However, the reliability estimates of scores derived from various holistic and analytical scoring strategies reported in the literature have relied on classical test theory (CT), which accounts for only a single source of variance within a given analysis. Generalizability theory (GT) is a more powerful and flexible strategy that allows for the simultaneous estimation of multiple sources of error variance when estimating the reliability of test scores. Using GT, two studies were conducted to investigate the impact of the number of raters and the type of decision (relative vs. absolute) on the reliability of writing scores. The results of both studies indicated that the reliability coefficients for writing scores decline (a) as the number of raters is reduced and (b) when absolute rather than relative decisions are made.
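The two findings in the abstract can be illustrated with a minimal G-theory sketch for a fully crossed persons x raters design. This is not the authors' analysis or data; the variance-component estimates come from standard two-way ANOVA expected mean squares, and the simulated essay scores (50 essays, 4 raters, assumed variance magnitudes) are hypothetical:

```python
import numpy as np

def variance_components(X):
    """Estimate variance components for a fully crossed
    persons x raters design (one score per cell) via
    expected mean squares from a two-way ANOVA."""
    n_p, n_r = X.shape
    grand = X.mean()
    ss_p = n_r * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_r = n_p * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_res = ((X - grand) ** 2).sum() - ss_p - ss_r
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))
    var_res = ms_res                                 # sigma^2_{pr,e}
    var_p = max(0.0, (ms_p - ms_res) / n_r)          # sigma^2_p (persons)
    var_r = max(0.0, (ms_r - ms_res) / n_p)          # sigma^2_r (raters)
    return var_p, var_r, var_res

def g_coefficients(var_p, var_r, var_res, n_raters):
    """Relative (E rho^2) and absolute (Phi) coefficients
    for a D-study averaging over n_raters raters."""
    rel_err = var_res / n_raters                     # relative error
    abs_err = (var_r + var_res) / n_raters           # absolute error adds rater main effect
    return var_p / (var_p + rel_err), var_p / (var_p + abs_err)

# Hypothetical simulation: 50 essays scored by 4 raters.
rng = np.random.default_rng(0)
n_p, n_r = 50, 4
scores = (rng.normal(0, 1.0, (n_p, 1))       # true essay quality
          + rng.normal(0, 0.3, (1, n_r))     # rater severity (main effect)
          + rng.normal(0, 0.5, (n_p, n_r)))  # rater-by-essay error

vp, vr, ve = variance_components(scores)
for k in (1, 2, 4):
    g, phi = g_coefficients(vp, vr, ve, k)
    print(f"{k} rater(s): E rho^2 = {g:.3f}, Phi = {phi:.3f}")
```

Because the absolute-error term counts rater severity (the rater main effect) as error while the relative-error term does not, Phi never exceeds E rho^2, and both coefficients shrink as the number of raters is reduced, which mirrors the pattern of results the article reports.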
ISSN:0013-1644
1552-3888
DOI:10.1177/00131649921970008