Development, Implementation, and Evaluation Methods for Dashboards in Health Care: Scoping Review
| Published in | JMIR Medical Informatics, 2024-12, Vol. 12, p. e59828 |
|---|---|
| Format | Article |
| Language | English |
Summary:

Dashboards have become ubiquitous in health care settings, but to achieve their goals, they must be developed, implemented, and evaluated using methods that help ensure they meet the needs of end users and are suited to the barriers and facilitators of the local context.
This scoping review aimed to explore published literature on health care dashboards to characterize the methods used to identify factors affecting uptake, strategies used to increase dashboard uptake, and evaluation methods, as well as dashboard characteristics and context.
MEDLINE, Embase, Web of Science, and the Cochrane Library were searched from inception through July 2020. Studies were included if they described the development or evaluation of a health care dashboard and were published between 2018 and 2020. We extracted clinical setting, purpose (categorized as clinical, administrative, or both), end users, design characteristics, methods used to identify factors affecting uptake, strategies to increase uptake, and evaluation methods.
From 116 publications, we extracted data for 118 dashboards. Inpatient (45/118, 38.1%) and outpatient (42/118, 35.6%) settings were most common. Most dashboards had ≥2 stated purposes (84/118, 71.2%); across all dashboards, 54 of 118 (45.8%) were administrative, 43 of 118 (36.4%) were clinical, and 20 of 118 (16.9%) served both purposes. Most dashboards included frontline clinical staff as end users (97/118, 82.2%). To identify factors affecting dashboard uptake, half involved end users in the design process (59/118, 50%); fewer described formative usability testing (26/118, 22%) or use of any theory or framework to guide development, implementation, or evaluation (24/118, 20.3%). The most common strategies used to increase uptake were education (60/118, 50.8%), audit and feedback (59/118, 50%), and advisory boards (54/118, 45.8%). Evaluations of dashboards (84/118, 71.2%) were mostly quantitative (60/118, 50.8%), with fewer using only qualitative methods (6/118, 5.1%) or a combination of quantitative and qualitative methods (18/118, 15.2%).
Most dashboards forgo steps during development that would help ensure they suit the needs of end users and the clinical context, and qualitative evaluation, which can provide insight into ways to improve dashboard effectiveness, is uncommon. Education and audit and feedback are frequently used to increase uptake. These findings illustrate the need to promulgate best practices in dashboard development and will be useful to dashboard planners.
| ISSN | 2291-9694 |
|---|---|
| DOI | 10.2196/59828 |