Reliability of intracerebral hemorrhage classification systems: A systematic review

Bibliographic Details
Published in: International Journal of Stroke, 2016-08, Vol. 11 (6), p. 626-636
Main Authors: Rannikmäe, Kristiina, Woodfield, Rebecca, Anderson, Craig S, Charidimou, Andreas, Chiewvit, Pipat, Greenberg, Steven M, Jeng, Jiann-Shing, Meretoja, Atte, Palm, Frederic, Putaala, Jukka, Rinkel, Gabriel JE, Rosand, Jonathan, Rost, Natalia S, Strbian, Daniel, Tatlisumak, Turgut, Tsai, Chung-Fen, Wermer, Marieke JH, Werring, David, Yeh, Shin-Joe, Al-Shahi Salman, Rustam, Sudlow, Cathie LM
Format: Article
Language: English
Description

Summary:

Background: Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems.

Methods: We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies’ characteristics, reporting quality, and potential for bias; summarized reliability with forest plots of kappa values; and performed meta-analyses of the proportion of cases classified into each subtype.

Summary of review: We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for both anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78–0.97 [six studies, 518 cases], mechanistic 0.89–0.93 [three studies, 510 cases]; intra-rater kappa values: anatomical 0.80–1 [three studies, 137 cases], mechanistic 0.92–0.93 [two studies, 368 cases]). Reporting quality varied, but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed by experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports, suggesting that the included studies are appropriately representative.

Conclusions: The reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines.
ISSN: 1747-4930
1747-4949
DOI: 10.1177/1747493016641962