
Assessment of Reproducibility and Agreement of the IDEAL Classification for Distal Radius Fractures

Bibliographic Details
Published in: Revista Brasileira de Ortopedia, 2024-12, Vol. 59 (6), p. e901-e906
Main Authors: Lima, João Victor da Rocha, Silva, Lucas Araújo, Feitosa, Antonio Guilherme Chagas Silva, Medeiros, Rafael Lima, Carvalho, Luis Fernando Martins, Moura, Bruno Wilson da Silva
Format: Article
Language: English
Summary: To analyze the reproducibility and the intra- and interobserver agreement of the IDEAL classification for distal radius fractures. This qualitative, analytical study evaluated 50 pairs of radiographs, in two views, from patients with distal radius fractures. Ten observers with different levels of orthopedic training assessed the radiographs in three separate evaluations. The results underwent Cohen and Fleiss Kappa testing to determine intra- and interobserver agreement levels; statistical calculations used Excel and SPSS, version 26.0. The Cohen Kappa values for the intraobserver evaluation indicated poor to little agreement (-0.177 to 0.259), with statistical significance in only one instance. The Fleiss Kappa values revealed little agreement among the resident group (0.277 to 0.383), with statistical significance; poor to little agreement among the general orthopedists (0.114 to 0.225), with statistical significance in most instances; and moderate agreement among the hand surgeons (0.449 to 0.533), with statistical significance. The IDEAL classification had interobserver agreement levels ranging from poor to moderate, influenced by the physicians' training level. Intraobserver agreement levels ranged from poor to little, but without statistical significance.
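For context, below is a minimal Python sketch of how the two agreement statistics in the abstract are computed. The study itself ran its calculations in Excel and SPSS 26.0, not this code, and the ratings, category labels, and counts here are hypothetical.

# Illustrative sketch only; data and category labels are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: one pair of ratings per item (used intraobserver,
    comparing an observer's repeated evaluations of the same radiographs)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # observed agreement: proportion of items rated identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # chance agreement from each rating's marginal distribution
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

def fleiss_kappa(counts):
    """Fleiss' kappa: counts[i][j] = number of raters assigning item i to
    category j (used interobserver, across the groups of observers)."""
    N = len(counts)        # number of items (e.g., radiograph pairs)
    n = sum(counts[0])     # raters per item (assumed constant)
    k = len(counts[0])     # number of categories
    # overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # per-item agreement among the n raters
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical usage: two evaluations of 5 items by one observer, then
# 5 items classified into 3 categories by 10 observers.
print(cohens_kappa(["A", "B", "C", "A", "B"], ["A", "B", "A", "A", "B"]))
counts = [[8, 1, 1], [2, 7, 1], [0, 1, 9], [5, 4, 1], [3, 3, 4]]
print(fleiss_kappa(counts))

The reported labels ("poor", "little", "moderate") are consistent with the commonly used Landis and Koch interpretation bands, under which values below 0 indicate poor agreement, 0.00-0.20 slight, 0.21-0.40 fair, and 0.41-0.60 moderate.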
ISSN: 0102-3616 (print), 1982-4378 (online)
DOI: 10.1055/s-0044-1792121