
Assessment of the reliability and usability of ChatGPT in response to spinal cord injury questions

Bibliographic Details
Published in: The Journal of Spinal Cord Medicine, 2024-06, p. 1-6
Main Authors: Özcan, Fatma; Örücü Atar, Merve; Köroğlu, Özlem; Yılmaz, Bilge
Format: Article
Language: English
Description
Summary: Patients increasingly use artificial intelligence chatbots to obtain information about their diseases. This study aimed to determine the reliability and usability of ChatGPT for spinal cord injury-related questions. Three raters simultaneously evaluated a total of 47 questions, derived from the three most frequently searched keywords in Google Trends ('general information', 'complications' and 'treatment'), scoring each answer for reliability and usability on a 7-point Likert scale. Inter-rater Cronbach α values indicated substantial agreement for both reliability and usability scores (α between 0.558 and 0.839, and between 0.373 and 0.772, respectively). The 'complications' section had the highest mean reliability score (5.38) and the 'general information' section the lowest (4.20); for usability, the 'treatment' section had the highest mean score (5.87) and the 'general information' section the lowest (4.80). The answers ChatGPT gave to spinal cord injury-related questions were reliable and useful. Nevertheless, it should be kept in mind that ChatGPT may provide incorrect or incomplete information, especially in the 'general information' section, which may mislead patients and their relatives.
ISSN: 2045-7723
DOI: 10.1080/10790268.2024.2361551
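
Illustrative note (not part of the record): the abstract reports inter-rater Cronbach α for three raters scoring each question on a 7-point Likert scale. A minimal Python sketch of how such a coefficient could be computed from a questions-by-raters rating matrix is shown below; the rating values are hypothetical and do not come from the study.

    import numpy as np

    def cronbach_alpha(ratings):
        """Cronbach's alpha for an (n_questions x n_raters) matrix of scores,
        treating each rater as an 'item'."""
        ratings = np.asarray(ratings, dtype=float)
        n_raters = ratings.shape[1]
        rater_vars = ratings.var(axis=0, ddof=1)        # variance of each rater's scores
        total_var = ratings.sum(axis=1).var(ddof=1)     # variance of per-question totals
        return (n_raters / (n_raters - 1)) * (1 - rater_vars.sum() / total_var)

    # Hypothetical 7-point Likert ratings from three raters for five questions
    reliability_scores = np.array([
        [5, 6, 5],
        [4, 4, 5],
        [6, 6, 7],
        [3, 4, 4],
        [5, 5, 6],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(reliability_scores):.3f}")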