Readability, quality and accuracy of generative artificial intelligence chatbots for commonly asked questions about labor epidurals: a comparison of ChatGPT and Bard
Published in: International journal of obstetric anesthesia 2025-02, Vol. 61, p. 104317, Article 104317
Format: Article
Language: English
Summary:
• Parents-to-be overwhelmingly search online for pregnancy health information.
• ChatGPT and Bard were accurate but required a high reading level; Bard provided longer, more readable and actionable responses.
• Generative AI chatbots should be improved and benchmarked for understanding and readability.
Over 90% of pregnant women and 76% of expectant fathers search online for pregnancy health information. We examined the readability, accuracy and quality of answers to common obstetric anesthesia questions from the popular generative artificial intelligence (AI) chatbots ChatGPT and Bard.
Twenty questions for the generative AI chatbots were derived from frequently asked questions on professional society, hospital and consumer websites. ChatGPT and Bard were queried in November 2023. Answers were graded for accuracy by four obstetric anesthesiologists. Quality was measured using the Patient Education Materials Assessment Tool for Print (PEMAT). Readability was measured using six readability indices. Accuracy, quality and readability were compared using independent t-tests.
Bard readability scores were at high school level, significantly easier than ChatGPT's college level by all scoring metrics (P
ISSN: 0959-289X, 1532-3374
DOI: 10.1016/j.ijoa.2024.104317