Online physician ratings fail to predict actual performance on measures of quality, value, and peer review

Bibliographic Details
Published in: Journal of the American Medical Informatics Association (JAMIA), 2018-04, Vol. 25(4), pp. 401-407
Main Authors: Daskivich, Timothy J, Houman, Justin, Fuller, Garth, Black, Jeanne T, Kim, Hyung L, Spiegel, Brennan
Format: Article
Language: English
Description
Summary: Patients use online consumer ratings to identify high-performing physicians, but it is unclear if ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance. We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores. Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, -0.04 to 0.04), primary care physician scores (β-coefficient range, -0.01 to 0.3), or administrator scores (β-coefficient range, -0.2 to 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%-32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted his/her score on another in 5 of 10 comparisons. Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance. Online consumer ratings should not be used in isolation to select physicians, given their poor association with clinical performance.
ISSN: 1067-5027; 1527-974X
DOI: 10.1093/jamia/ocx083
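
The abstract's central analysis is a multivariable regression of physician performance scores on mean consumer ratings, followed by a quartile concordance check. The sketch below is purely illustrative (Python with pandas and statsmodels): the dataset is randomly generated placeholder data, and every column name and covariate is invented here rather than taken from the study; it only shows the general shape of such an analysis.

```python
# Illustrative sketch only: a multivariable linear model of the kind the
# abstract describes, fit on randomly generated placeholder data. None of
# the column names, covariates, or values below come from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 78  # the study included 78 physicians across 8 specialties

df = pd.DataFrame({
    "performance_score": rng.normal(0.0, 1.0, n),  # standardized composite (placeholder)
    "mean_rating": rng.uniform(1.0, 5.0, n),       # mean consumer rating, 1-5 stars
    "years_in_practice": rng.integers(1, 40, n),   # hypothetical covariate
    "specialty": rng.choice(list("ABCDEFGH"), n),  # 8 specialty labels (placeholder)
})

# Multivariable model: the coefficient on mean_rating estimates the
# adjusted association between consumer ratings and performance scores.
model = smf.ols(
    "performance_score ~ mean_rating + years_in_practice + C(specialty)",
    data=df,
).fit()
print(f"beta for mean_rating: {model.params['mean_rating']:.3f} "
      f"(p = {model.pvalues['mean_rating']:.3f})")

# Quartile concordance, as in the abstract: of physicians in the lowest
# quartile of performance, what share also have bottom-quartile ratings?
low_perf = df["performance_score"] <= df["performance_score"].quantile(0.25)
low_rating = df["mean_rating"] <= df["mean_rating"].quantile(0.25)
print(f"bottom-quartile concordance: "
      f"{(low_perf & low_rating).mean() / low_perf.mean():.2f}")
```

In this framing, a β-coefficient near zero on mean_rating corresponds to the null association the authors report, and the concordance ratio corresponds to their finding that only 5%-32% of bottom-quartile performers also had bottom-quartile consumer ratings.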