Picture This: Presenting Longitudinal Patient-Reported Outcome Research Study Results to Patients
Published in: Medical Decision Making, 2018-11, Vol. 38 (8), p. 994-1005
Format: Article
Language: English
Summary:

Background. Patient-reported outcome (PRO) results from clinical trials and research studies can inform patient-clinician decision making. However, data presentation issues specific to PROs, such as scaling directionality (higher scores may represent better or worse outcomes) and scoring strategies (normed v. nonnormed scores), can make the interpretation of PRO scores uniquely challenging.

Objective. To identify the association of PRO score directionality, score norming, and other factors with a) how accurately PRO scores are interpreted and b) how clearly they are rated by patients, clinicians, and PRO researchers.

Methods. We electronically surveyed adult cancer patients/survivors, oncology clinicians, and PRO researchers and conducted one-on-one cognitive interviews with patients/survivors and clinicians. Participants were randomized to 1 of 3 line graph formats showing longitudinal change: higher scores indicating “better,” “more” (better for function, worse for symptoms), or “normed” to a population average. Quantitative data evaluated interpretation accuracy and clarity. Online survey comments and cognitive interviews were analyzed qualitatively.

Results. The Internet sample included 629 patients, 139 clinicians, and 249 researchers; 10 patients and 5 clinicians completed cognitive interviews. “Normed” line graphs were less accurately interpreted than “more” (odds ratio [OR] = 0.76; P = 0.04). “Better” line graphs were more accurately interpreted than both “more” (OR = 1.43; P = 0.01) and “normed” (OR = 1.88; P = 0.04). “Better” line graphs were more likely to be rated clear than “more” (OR = 1.51; P = 0.05). Qualitative data informed interpretation of these findings.

Limitations. The survey relied on the online platforms used for distribution and consequent snowball sampling. We do not have information regarding participants’ numeracy/graph literacy.

Conclusions. For communicating PROs as line graphs in patient educational materials and decision aids, these results support using graphs with higher scores consistently indicating better outcomes.
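The conclusion recommends presenting PRO trajectories as line graphs in which higher scores consistently indicate better outcomes. As a minimal sketch of that convention only (not taken from the article; the visit labels and scores below are hypothetical), a symptom scale such as fatigue can be reverse-scored so it shares the same “higher = better” direction as a function scale:

```python
import matplotlib.pyplot as plt

# Hypothetical longitudinal PRO scores on a 0-100 scale, rescaled so that
# higher always means a better outcome ("better"-directed line graphs).
visits = ["Baseline", "Month 3", "Month 6", "Month 9"]
physical_function = [55, 60, 68, 72]   # function scale: higher = better
fatigue_reversed = [40, 50, 58, 65]    # symptom scale reverse-scored: higher = less fatigue

fig, ax = plt.subplots()
ax.plot(visits, physical_function, marker="o", label="Physical function")
ax.plot(visits, fatigue_reversed, marker="s", label="Fatigue (reverse-scored)")
ax.set_ylim(0, 100)
ax.set_xlabel("Study visit")
ax.set_ylabel("PRO score (higher = better)")
ax.set_title("Longitudinal PRO scores, higher = better")
ax.legend()
plt.tight_layout()
plt.show()
```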
ISSN: 0272-989X, 1552-681X
DOI: 10.1177/0272989X18791177