Notes From the Field: Automatic Item Generation, Standard Setting, and Learner Performance in Mastery Multiple-Choice Tests
Published in: | Evaluation & the Health Professions, 2021-09, Vol. 44(3), pp. 315–318 |
---|---|
Format: | Article |
Language: | English |
Summary: | Mastery learning assessments have been described in simulation-based educational interventions; however, studies applying mastery learning to multiple-choice tests (MCTs) are lacking. This study investigates an approach to item generation and standard setting for mastery learning MCTs and evaluates the consistency of learner performance across sequential tests. Item models, variables for question stems, and mastery standards were established through a consensus process. Two test forms were created from the item models and administered at two training programs. The primary outcome, the test–retest consistency of pass–fail decisions across versions of the test, was 94% (κ = .54). The decision-consistency classification index was .85, and item-level consistency was 90% (κ = .77, SE = .03). These findings support the use of automatic item generation to create mastery MCTs that produce consistent pass–fail decisions. This technique broadens the range of assessment methods available to educators in settings that require serial MCT testing, including mastery learning curricula. |
ISSN: | 0163-2787; 1552-3918 |
DOI: | 10.1177/0163278720908914 |
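
The summary above describes automatic item generation via item models: parameterized question stems whose variable slots are filled to produce parallel test forms. The paper's actual item models are not reproduced in this record, so the following is only a minimal Python sketch of the general technique; the stem, variables, and answer logic are hypothetical stand-ins invented for illustration.

```python
import random

# Hypothetical item model: a parameterized stem plus rules for the key and
# distractors. All content below is invented; it is not the study's material.
ITEM_MODEL = {
    "stem": ("A {age}-year-old patient weighing {weight} kg requires drug X "
             "at {dose} mg/kg. What total dose (mg) should be administered?"),
    "variables": {
        "age": [25, 40, 60],
        "weight": [50, 70, 90],
        "dose": [2, 5, 10],
    },
}

def generate_item(model, rng):
    """Instantiate one multiple-choice item from an item model."""
    values = {name: rng.choice(options)
              for name, options in model["variables"].items()}
    stem = model["stem"].format(**values)
    key = values["weight"] * values["dose"]   # correct answer
    distractors = {key + values["weight"],    # plausible miscalculations
                   key - values["weight"],
                   values["weight"] + values["dose"]}
    distractors.discard(key)                  # guard against collisions
    options = [key] + sorted(distractors)[:3]
    rng.shuffle(options)
    return {"stem": stem, "options": options, "key": key}

# Two parallel test forms drawn from the same item models, mirroring the
# study's two-form design.
rng = random.Random(0)
form_a = [generate_item(ITEM_MODEL, rng) for _ in range(5)]
form_b = [generate_item(ITEM_MODEL, rng) for _ in range(5)]
```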
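The agreement statistics in the summary (94% raw pass–fail agreement with κ = .54; item-level κ = .77) are Cohen's kappa figures. A minimal sketch of that computation, assuming paired pass–fail decisions from the two test forms; the decision data below are invented:

```python
def cohens_kappa(decisions_a, decisions_b):
    """Cohen's kappa for paired binary (1 = pass, 0 = fail) decisions."""
    n = len(decisions_a)
    observed = sum(a == b for a, b in zip(decisions_a, decisions_b)) / n
    # Chance agreement from the marginal pass rates of each form.
    p_a = sum(decisions_a) / n
    p_b = sum(decisions_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Invented example: pass-fail decisions for ten learners on each form.
form_a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
form_b = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(cohens_kappa(form_a, form_b))  # agreement beyond chance
```

Kappa discounts the agreement expected by chance from each form's marginal pass rate, which is why a high raw agreement (94%) can coexist with a moderate κ (.54) when most examinees pass both forms.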