Validating a computerized scoring system for assessing writing and placing students in composition courses
Published in: Assessing Writing, 2006, Vol. 11 (3), pp. 167–178
Format: Article
Language: English
Summary: How do scores from writing samples generated by computerized essay scorers compare to those generated by “untrained” human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample scores generated by the IntelliMetric™ automated scoring system and scores generated by University Preparation English faculty, as well as by examining the predictive validity of both the automated and the human scores. The results revealed significant correlations between the faculty scores and the IntelliMetric™ scores of the ACCUPLACER™ OnLine WritePlacer Plus test. Moreover, logistic regression models that used both the IntelliMetric™ scores and the average faculty scores placed students more accurately (77% overall correct placement rate) than models incorporating only the average faculty score or only the IntelliMetric™ scores.
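The combined-score placement model the summary describes lends itself to a compact illustration. Below is a minimal sketch, assuming hypothetical data, score scales, and a binary place/not-place outcome, of fitting a logistic regression on two predictors (an automated essay score and an average faculty score); the variable names, values, and scikit-learn implementation are illustrative assumptions, not the study's actual model.

```python
# Minimal sketch (not the study's model) of combining an automated essay
# score with an average human rater score in a logistic regression to make
# a binary course-placement decision. All data below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per student.
# Columns: [automated essay score, average faculty score]
X = np.array([
    [4.0, 3.5], [6.5, 6.0], [3.0, 2.5], [7.5, 7.0],
    [5.0, 4.5], [2.5, 3.0], [6.0, 5.5], [4.5, 5.0],
])
# 1 = placed into the standard composition course, 0 = preparatory course
y = np.array([0, 1, 0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Overall correct placement rate on the training data, analogous in spirit
# to the 77% rate the abstract reports for the combined-score model.
print(f"correct placement rate: {model.score(X, y):.0%}")

# Predicted placement for a new student with an automated score of 5.5
# and an average faculty score of 5.0.
print(model.predict([[5.5, 5.0]]))
```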
ISSN: 1075-2935, 1873-5916
DOI: 10.1016/j.asw.2007.01.002