Artificial Intelligence in Health Care-Understanding Patient Information Needs and Designing Comprehensible Transparency: Qualitative Study

Bibliographic Details
Published in:JMIR AI 2023-01, Vol.2 (e46487), p.e46487
Main Authors: Robinson, Renee, Liday, Cara, Lee, Sarah, Williams, Ishan C, Wright, Melanie, An, Sungjoon, Nguyen, Elaine
Format: Article
Language:English
Description
Summary: Artificial intelligence (AI) is a branch of computer science that uses advanced computational methods, such as machine learning (ML), to calculate and/or predict health outcomes and address patient and provider health needs. While these technologies show great promise for improving health care, especially in diabetes management, there are usability and safety concerns for both patients and providers about the use of AI/ML in health care management. To support and ensure the safe use of AI/ML technologies in health care, the team worked to better understand: 1) patient information and training needs; 2) the factors that influence patients' perceived value of and trust in AI/ML health care applications; and 3) how best to support safe and appropriate use of AI/ML-enabled devices and applications among people living with diabetes. To understand general patient perspectives and information needs related to the use of AI/ML in health care, we conducted a series of focus groups (n=9) and interviews (n=3) with patients (n=40), and interviews with providers (n=6), in Alaska, Idaho, and Virginia. Grounded theory guided data gathering, synthesis, and analysis. Thematic content and constant comparison analysis were used to identify relevant themes and subthemes. Inductive approaches were used to link data to key concepts, including preferred patient-provider interactions and patient perceptions of trust, accuracy, value, assurances, and information transparency. Key summary themes and recommendations focused on: 1) patient preferences for AI/ML-enabled device and/or application information; 2) patient and provider AI/ML-related device and/or application training needs; 3) factors contributing to patient and provider trust in AI/ML-enabled devices and/or applications; and 4) AI/ML-related device and/or application functionality and safety considerations.
Participants (patients and providers) offered a number of recommendations to improve device functionality and to guide information and labeling mandates (e.g., links to online video resources and access to 24/7 live in-person or virtual emergency support). Other patient recommendations included: 1) access to practice devices; 2) connection to local supports and reputable community resources; and 3) simplified displays and alert limits. Recommendations from both patients and providers could be used by federal oversight agencies to improve AI/ML monitoring of technology use in diabetes, improving device safety and efficacy.
ISSN:2817-1705
DOI:10.2196/46487