An Investigation of Disagreement in Causality Assessment of Adverse Drug Reactions
Published in: Pharmaceutical Medicine 2011-02, Vol. 25 (1), p. 17
Main Authors:
Format: Article
Language: English
Summary:
Background: Causality assessment is used to determine the likelihood that a drug caused a particular adverse event. There are multiple methods for assessing the causality of suspected adverse drug reactions (ADRs). Undertaking some form of causality assessment of suspected ADRs is part of pharmacovigilance practice, but it is potentially of little value if the reproducibility of results is consistently poor, and results may vary with the background and experience of the assessor.
Objective: The aim of this study was to compare inter-assessor agreement for causality assessment of epidemiological study data, both from an individual perspective and between individuals from different healthcare backgrounds.
Study Methods: Six assessors (two pharmacists, two physicians and two nurses) assessed 200 ADR reports for causality using the Naranjo ADR Probability Scale, the Venulet algorithm and the WHO causality term assessment criteria. Agreement between assessors using the same algorithm was examined, and agreement between the algorithms for the same assessor was also measured.
Results: For all methods, the majority of causality assessments resulted in 'probable' or 'possible' categorization. Physician and pharmacist assessment was more likely than nurse assessment to result in 'definite' or 'certain' causality assessments when using the Naranjo and WHO algorithms. Use of the Venulet algorithm resulted in a higher number of 'unlikely' or 'unrelated' assessments than the other two methods. Inter-assessor agreement was no greater than 'fair' (weighted kappa [κw] = 0.31) for any comparison between raters, and for three comparisons inter-assessor agreement was less than that expected by chance. Conversely, the weighted observed proportion of agreement, Po(w), was good (>0.6) for all assessments. Intra-assessor agreement between scales was highest for the Naranjo algorithm versus the WHO algorithm, with 'substantial' (κw = 0.61) agreement between assessments made by pharmacist 1. The lowest level of agreement within assessors came from nurse 2 when comparing the Naranjo and Venulet algorithms, where agreement was 'slight' (κw = 0.19), though the mean Po(w) for intra-assessor agreement was 0.81.
Conclusions: Comparability between assessors was found to be 'fair' or less for the ADR causality assessment methods examined in this study. The most consistent results were produced by application of the Naranjo algorithm and the least consistent by the Venulet algorithm.
ISSN: 1178-2595; 1179-1993
DOI: 10.2165/11539800-000000000-00000
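The contrast the abstract reports between low weighted kappa (κw) and high weighted observed agreement (Po(w)) follows from how the two statistics are defined. Below is a minimal sketch of both measures for two raters on an ordinal causality scale, assuming linear agreement weights; the weighting scheme, the category ordering and the example ratings are illustrative assumptions, not the study's actual data or exact method.

```python
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, categories):
    """Return (Po(w), kappa_w) for two raters using linear agreement weights."""
    k = len(categories)
    n = len(ratings_a)

    # Observed joint proportions p[i][j]: rater A gave category i, rater B gave j.
    counts = Counter(zip(ratings_a, ratings_b))
    p = [[counts.get((categories[i], categories[j]), 0) / n for j in range(k)]
         for i in range(k)]

    # Linear agreement weights: 1 on the diagonal, decreasing with ordinal distance.
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]

    # Marginal proportions for each rater.
    pa = [sum(p[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(p[i][j] for i in range(k)) for j in range(k)]

    po_w = sum(w[i][j] * p[i][j] for i in range(k) for j in range(k))        # weighted observed agreement
    pe_w = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))  # chance-expected agreement
    kappa_w = (po_w - pe_w) / (1 - pe_w)
    return po_w, kappa_w

# Hypothetical ratings of six reports on a WHO-style ordinal scale.
categories = ["unlikely", "possible", "probable", "certain"]
rater_1 = ["probable", "possible", "possible", "certain", "probable", "possible"]
rater_2 = ["possible", "possible", "probable", "probable", "probable", "unlikely"]
po_w, kappa_w = weighted_kappa(rater_1, rater_2, categories)
print(f"Po(w) = {po_w:.2f}, weighted kappa = {kappa_w:.2f}")
```

When most assessments cluster in 'probable' or 'possible', the chance-expected agreement Pe(w) is itself high, so Po(w) can exceed 0.6 while κw remains low or even falls below zero, which is consistent with the pattern reported in the abstract.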