The Impact of Different Scoring Rubrics for Grading Virtual Patient-Based Exams
Published in: Journal of Educational Computing Research, 2014-01, Vol. 50(1), pp. 97–118
Main Authors: ,
Format: Article
Language: English
Summary: Virtual patient cases (VPs) are used for healthcare education and assessment. Most VP systems track user interactions for later use in assessment, yet few studies have investigated how virtual exam cases should be scored and graded. We applied eight different scoring models to a data set from 154 students. Issues studied included the impact of penalizing guessing, requiring a correct diagnosis, different grading levels, and the effect of using weighted diagnosis metrics. Controlling for random guessing is necessary and can be accomplished by a rubric that measures the relative efficiency of the learner's inquiries against the total number of inquiries. Using a straight percentage score versus a curved exam score had a major impact on grades. Significant differences were found between metrics: only one of the eight rubric models produced a Gaussian grade distribution. Course directors need to analyze the expected learning outcomes of a course to determine a scoring metric that assesses those particular needs; the grading rubric must also control for guessing.
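The abstract's core ideas — an efficiency rubric that penalizes shotgun guessing, and the contrast between straight-percentage and curved grading — can be sketched in a few lines. This is an illustrative sketch only: the function names, cutoffs, and quartile-based curve below are assumptions for demonstration, not the eight rubric models studied in the article.

```python
def efficiency_score(relevant, total):
    """Fraction of a learner's inquiries that were relevant to the case.
    Penalizes guessing: each extra irrelevant inquiry lowers the score.
    (Illustrative metric, not the article's actual rubric.)"""
    if total == 0:
        return 0.0
    return relevant / total

def straight_percentage_grade(score, cutoffs=(0.9, 0.8, 0.7, 0.6)):
    """Straight-percentage grading: fixed cutoffs, independent of the cohort.
    Cutoff values here are assumed for illustration."""
    for grade, cut in zip("ABCD", cutoffs):
        if score >= cut:
            return grade
    return "F"

def curved_grade(score, cohort_scores):
    """Curved grading: grade depends on rank within the cohort, so the same
    raw score can earn different grades in different cohorts."""
    rank = sum(s <= score for s in cohort_scores) / len(cohort_scores)
    if rank >= 0.75:
        return "A"
    if rank >= 0.50:
        return "B"
    if rank >= 0.25:
        return "C"
    return "D"
```

For example, a student with 8 relevant inquiries out of 10 scores 0.8, which is a "B" under the fixed cutoffs above but could be an "A" on a curve in a weaker cohort — the kind of divergence the study reports between straight and curved scoring.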
ISSN: 0735-6331, 1541-4140
DOI: 10.2190/EC.50.1.e