Peer‐to‐peer validation of Ki‐67 scoring in a pathology quality circle as a tool to assess interobserver variability: are we better than we thought?

Bibliographic Details
Published in: APMIS: Acta Pathologica, Microbiologica et Immunologica Scandinavica, 2024-10, Vol. 132 (10), p. 718-727
Main Authors: Bernhardt, Marit, Weinhold, Leonie, Sanders, Christine, Hommerding, Oliver, Lau, Jan‐Frederic, Toma, Marieta, Tischler, Verena, Schmid, Matthias, Zienkiewicz, Tomasz, Hildenbrand, Ralf, Gerlach, Peter, Zhou, Hui, Braun, Martin, Müller, Gunnar, Sieber, Erich, Marko, Christian, Kristiansen, Glen
Format: Article
Language:English
Description
Summary: Ki‐67, a nuclear protein expressed in all stages of cellular proliferation, is a valuable tool to assess tumor proliferation and has been linked to more aggressive tumor behavior. However, interlaboratory staining heterogeneity and interobserver variability challenge its reproducibility. Round Robin tests are a suitable tool to standardize and harmonize immunohistochemical and molecular analyses in histopathology. This study investigates the interrater and interlaboratory reproducibility of Ki‐67 scoring using both manual and automated approaches. Unstained TMA slides comprising diverse tumor types (breast cancer, neuroendocrine tumors, lymphomas, and head and neck squamous cell carcinoma) were distributed to six pathology laboratories, each employing their routine staining protocols. Manual and automated scoring methods were applied, and interrater and interlaboratory agreement was assessed using intraclass correlation coefficients (ICC). The results show good‐to‐excellent reliability overall, with automated scoring demonstrating higher consistency (ICC 0.955) than manual scoring (ICC 0.871). Results were more variable for individual entities: reliability remained good for lymphomas (ICC 0.878) and breast cancer (ICC 0.784) but was poor in well‐differentiated neuroendocrine tumors (ICC 0.354). This study clearly advocates standardized practices and training to ensure consistency in Ki‐67 assessment, and it demonstrates that this can be achieved in a peer‐to‐peer approach in local quality circles.
ISSN: 0903-4641
EISSN: 1600-0463
DOI: 10.1111/apm.13451