Shining a spotlight on scoring in the OSCE: Checklists and item weighting
Introduction: There has been a long-running debate about the validity of item-based checklist scoring of performance assessments like OSCEs. In recent years, the conception of a checklist has developed from its dichotomous inception into a more 'key-features' and/or chunked approach, where 'items' have the potential to be weighted differently, but the literature does not always reflect these broader conceptions.
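As an illustration of what differential item weighting means in practice, the following minimal sketch (in Python; the item names, weights and responses are hypothetical and not taken from the article) contrasts a station score under a simple equal-weight checklist with the same responses scored under a key-feature-weighted scheme.

```python
# A minimal sketch (not the authors' code): scoring one OSCE station checklist
# under an equal-weight scheme and under a differential ('key-feature') scheme.
# Item names, weights and the candidate's responses are hypothetical.

def station_score(items, weights=None):
    """Return a percentage score for one candidate on one station.

    items   : dict mapping item name -> 1 (achieved) or 0 (not achieved)
    weights : optional dict mapping item name -> relative weight;
              if omitted, every item counts equally (the 'unweighted' scheme).
    """
    if weights is None:
        weights = {name: 1 for name in items}
    achieved = sum(weights[name] * done for name, done in items.items())
    maximum = sum(weights.values())
    return 100 * achieved / maximum


# One candidate's responses on a hypothetical five-item history-taking station.
responses = {
    "introduces_self": 1,
    "elicits_presenting_complaint": 1,
    "asks_red_flag_questions": 0,   # the clinically critical 'key feature'
    "explores_patient_concerns": 1,
    "summarises_back_to_patient": 1,
}

# Differential weights emphasising the clinically critical item.
key_feature_weights = {
    "introduces_self": 1,
    "elicits_presenting_complaint": 2,
    "asks_red_flag_questions": 4,
    "explores_patient_concerns": 2,
    "summarises_back_to_patient": 1,
}

print(station_score(responses))                       # 80.0 under equal weighting
print(station_score(responses, key_feature_weights))  # 60.0 under key-feature weighting
```

Under equal weighting the missed key-feature item costs this candidate 20 percentage points; under the weighted scheme the same omission costs 40, reflecting the greater importance an assessor would attach to that sub-task.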
Published in: | Medical teacher 2020-09, Vol.42 (9), p.1037-1042 |
---|---|
Main Authors: | Homer, Matt; Fuller, Richard; Hallam, Jennifer; Pell, Godfrey |
Format: | Article |
Language: | English |
Subjects: | assessment quality; Assessors; Candidates; Check Lists; checklist design; Checklists; Evaluation; item weighting; OSCE scoring; Psychometrics; Scoring; Weighting |
container_end_page | 1042 |
container_issue | 9 |
container_start_page | 1037 |
container_title | Medical teacher |
container_volume | 42 |
creator | Homer, Matt; Fuller, Richard; Hallam, Jennifer; Pell, Godfrey |
description | Introduction: There has been a long-running debate about the validity of item-based checklist scoring of performance assessments like OSCEs. In recent years, the conception of a checklist has developed from its dichotomous inception into a more 'key-features' and/or chunked approach, where 'items' have the potential to be weighted differently, but the literature does not always reflect these broader conceptions.
Methods: We consider theoretical, design and (clinically trained) assessor issues related to differential item weighting in the checklist scoring of OSCE stations. Using empirical evidence, this work also compares candidate pass/fail decisions and psychometric quality under different item-weighting approaches (i.e. a simple 'unweighted' scheme versus a differentially weighted one).
Results: The choice of weighting scheme affects approximately 30% of the key borderline group of candidates, and 3% of candidates overall (see the decision-change sketch after this record). We also find that measures of overall assessment quality are slightly better under the differentially weighted scoring system.
Discussion and conclusion: Differentially weighted modern checklists can contribute to valid assessment outcomes and bring a range of additional benefits to the assessment. The weighting of particular items should be a key design consideration during station development and must align with clinical assessors' expectations of the relative importance of sub-tasks. |
doi | 10.1080/0142159X.2020.1781072 |
format | article |
publisher | Taylor & Francis (England) |
pmid | 32608303 |
orcidid | https://orcid.org/0000-0002-1161-5938 |
rights | 2020 Informa UK Limited, trading as Taylor & Francis Group |
identifier | ISSN: 0142-159X |
ispartof | Medical teacher, 2020-09, Vol.42 (9), p.1037-1042 |
issn | 0142-159X (ISSN); 1466-187X (EISSN) |
language | eng |
source | Applied Social Sciences Index & Abstracts (ASSIA); Taylor and Francis:Jisc Collections:Taylor and Francis Read and Publish Agreement 2024-2025:Medical Collection (Reading list) |
subjects | assessment quality; Assessors; Candidates; Check Lists; checklist design; Checklists; Evaluation; item weighting; OSCE scoring; Psychometrics; Scoring; Weighting |
title | Shining a spotlight on scoring in the OSCE: Checklists and item weighting |
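The Results in the abstract above report that the choice of weighting scheme changes outcomes for roughly 30% of borderline candidates and 3% of candidates overall. The sketch below runs on simulated data with hypothetical cut scores (none of this is the study's data or code) and shows one straightforward way to compute that kind of figure: classify every candidate under each scheme's cut score and count whose pass/fail decision changes, overall and within a borderline band.

```python
# A minimal sketch on simulated data (not the study's dataset or code) of the
# comparison reported in the Results: how many candidates' pass/fail decisions
# change when an assessment is rescored under a differentially weighted scheme.
import random

random.seed(1)

def classify(scores, cut):
    """Map each candidate to 'pass' or 'fail' against a given cut score."""
    return {cand: ("pass" if score >= cut else "fail") for cand, score in scores.items()}

# Hypothetical percentage scores for 200 candidates under the unweighted scheme,
# and a small perturbation standing in for the effect of rescoring with weights.
candidates = [f"c{i:03d}" for i in range(200)]
unweighted = {c: random.gauss(65, 10) for c in candidates}
weighted = {c: score + random.gauss(0, 3) for c, score in unweighted.items()}

# Cut scores of the kind a borderline-group or borderline-regression standard
# setting method might produce for each scheme (illustrative values only).
cut_unweighted, cut_weighted = 55.0, 56.0

decisions_u = classify(unweighted, cut_unweighted)
decisions_w = classify(weighted, cut_weighted)

changed = [c for c in candidates if decisions_u[c] != decisions_w[c]]

# Define 'borderline' as scoring within 5 percentage points of the unweighted cut.
borderline = [c for c in candidates if abs(unweighted[c] - cut_unweighted) < 5]
changed_borderline = [c for c in changed if c in borderline]

print(f"{len(changed) / len(candidates):.1%} of all candidates change decision")
print(f"{len(changed_borderline) / len(borderline):.1%} of borderline candidates change decision")
```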