Trait Ratings for Automated Essay Grading

Bibliographic Details
Published in: Educational and Psychological Measurement, 2002-02, Vol. 62 (1), p. 5-18
Main Authors: Shermis, Mark D., Koch, Chantal Mees, Page, Ellis B., Keith, Timothy Z., Harrington, Susanmarie
Format: Article
Language:English
Description
Summary: This study employed an automated grader to evaluate essays, both holistically and with the rating of traits (content, organization, style, mechanics, and creativity), for Web-based student essays serving as placement tests at a large Midwestern university. The authors report the results of two combined experiments, based on random selection from 1,193 essays. In the first experiment, the essays of 807 students were used to create statistical predictions for the essay-grading software. In the second experiment, the ratings from a separate, random sample of 386 essays were used to compare the ratings of six human judges against those generated by the computer. The interjudge correlation of the human raters alone was r = .71, but the interrater reliability of all six judges in combination with computer scoring reached .83. The essay-grading software was an efficient means for evaluating the essays, with a capacity for grading approximately six documents every second. Other potential feedback measures for use in writing courses are also discussed.
ISSN: 0013-1644
1552-3888
DOI: 10.1177/001316440206200101