
Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy

Bibliographic Details
Published in: Advances in Medical Education and Practice, 2024-01, Vol. 15, pp. 393-400
Main Authors: Bharatha, Ambadasu; Ojeh, Nkemcho; Fazle Rabbi, Ahbab Mohammad; Campbell, Michael H; Krishnamurthy, Kandamaran; Layne-Yarde, Rhaheem N A; Kumar, Alok; Springer, Dale C R; Connell, Kenneth L; Majumder, Md Anwarul Azim
Format: Article
Language: English
Description
Summary: This research investigated the capabilities of ChatGPT-4 compared with those of medical students in answering MCQs, using the revised Bloom's Taxonomy as a benchmark. A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing. The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) than the students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's Taxonomy levels did not. A detailed association check between program levels and Bloom's Taxonomy levels for correct answers by ChatGPT-4 showed a highly significant correlation (p
ISSN: 1179-7258
DOI: 10.2147/AMEP.S457408