Development of a lecture evaluation tool rooted in cognitive load theory: A modified Delphi study
Published in: AEM Education and Training, 2023-02, Vol. 7(1), p. e10839
Format: Article
Language: English
Summary:

Background
Didactics play a key role in medical education, yet there is no standardized didactic evaluation tool to assess quality and provide feedback to instructors. Cognitive load theory provides a framework for lecture evaluation. We sought to develop an evaluation tool, rooted in cognitive load theory, to assess the quality of didactic lectures.
Methods
We used a modified Delphi method to achieve expert consensus on items for a lecture evaluation tool. Nine emergency medicine educators with expertise in cognitive load participated in three modified Delphi rounds. In the first two rounds, experts rated the importance of including each item in the evaluation rubric on a 1-to-9 Likert scale, with 1 labeled "not at all important" and 9 labeled "extremely important." In the third round, experts made a binary choice of whether each item should be included in the final evaluation tool. In every round, experts were invited to provide written comments, edits, and suggested additional items. Modifications were made between rounds based on item scores and expert feedback. We calculated descriptive statistics for item scores.
Results
We completed three Delphi rounds, each with 100% response rate. After Round 1, we removed one item, made major changes to two items, made minor wording changes to nine items, and modified the scale of one item. Following Round 2, we eliminated three items, made major wording changes to one item, and made minor wording changes to one item. After the third round, we made minor wording changes to two items. We also reordered and categorized items for ease of use. The final evaluation tool consisted of nine items.
Conclusions
We developed a lecture assessment tool rooted in cognitive load theory and specific to medical education. This tool can be applied to assess the quality of instruction and provide important feedback to speakers.
ISSN: 2472-5390
DOI: 10.1002/aet2.10839