A machine learning model for separating epithelial and stromal regions in oral cavity squamous cell carcinomas using H&E-stained histology images: A multi-center, retrospective study
Published in: Oral Oncology, 2022-08, Vol. 131, Article 105942
Main Authors:
Format: Article
Language: English
Summary:
• The deep learning model achieved consistently accurate epithelium segmentation.
• The model trained on 10x magnification images achieved the best performance.
• Morphologic features extracted from human- and AI-annotated epithelial regions are equivalent.
Tissue slides from oral cavity squamous cell carcinoma (OC-SCC), particularly the epithelial regions, hold morphologic features that are both diagnostic and prognostic. Yet previously developed approaches for automated epithelium segmentation in OC-SCC have not been independently tested in a multi-center setting. In this study, we aimed to investigate the effectiveness and applicability of a convolutional neural network (CNN) model for epithelial segmentation using digitized H&E-stained diagnostic slides from OC-SCC patients in a multi-center setting.
A CNN model was developed to segment the epithelial regions of digitized slides (n = 810) retrospectively collected from five different centers. Deep learning models were trained and validated using well-annotated tissue microarray (TMA) images (n = 212) at various magnifications. The best-performing model was locked down and used for independent testing on a total of 478 whole-slide images (WSIs). Manually annotated epithelial regions were used as the reference standard for evaluation. We also compared the model-generated results against IHC-stained epithelium (n = 120) as the reference.
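The article does not include code; purely as an illustration, the sketch below shows one way a locked-down, patch-wise CNN could be applied to a whole-slide image. It assumes a trained PyTorch segmentation model that outputs per-pixel epithelium logits and uses OpenSlide to read tiles at a downsampled pyramid level approximating 10x magnification. The function name, tile size, level choice, and probability threshold are illustrative assumptions, not values from the study.

```python
# A minimal sketch (not the authors' pipeline) of patch-wise epithelium
# segmentation on a WSI. Assumes `model` is a trained segmentation network
# whose output has shape [1, 1, H, W] (per-pixel epithelium logits).
import numpy as np
import openslide
import torch

def segment_wsi(slide_path: str, model: torch.nn.Module,
                level: int = 1, tile: int = 512, threshold: float = 0.5):
    slide = openslide.OpenSlide(slide_path)
    width, height = slide.level_dimensions[level]   # size at the chosen level
    scale = slide.level_downsamples[level]          # factor back to level 0
    mask = np.zeros((height, width), dtype=np.uint8)

    model.eval()
    with torch.no_grad():
        for y in range(0, height, tile):
            for x in range(0, width, tile):
                w, h = min(tile, width - x), min(tile, height - y)
                # read_region expects level-0 coordinates for the tile origin
                region = slide.read_region((int(x * scale), int(y * scale)),
                                           level, (w, h))
                rgb = np.asarray(region.convert("RGB"), dtype=np.float32) / 255.0
                batch = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)
                prob = torch.sigmoid(model(batch))[0, 0].numpy()
                # threshold probabilities into a binary epithelium mask
                mask[y:y + h, x:x + w] = (prob > threshold).astype(np.uint8)
    return mask
```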
The locked-down CNN model trained on the TMA image training cohorts at 10x magnification achieved the best segmentation performance. The locked-down model performed consistently, yielding Pixel Accuracy, Recall Rate, Precision Rate, and Dice Coefficient values ranging from 95.8% to 96.6%, 79.1% to 93.8%, 85.7% to 89.3%, and 82.3% to 89.0%, respectively, across the three independent testing WSI cohorts.
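For reference, the following is a minimal sketch (not the authors' evaluation code) of how pixel accuracy, recall, precision, and the Dice coefficient can be computed from a predicted binary epithelium mask and a manually annotated reference mask, assuming both are NumPy arrays with 1 for epithelium and 0 for stroma/background.

```python
# Standard pixel-level segmentation metrics from predicted and reference masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8):
    """Return pixel accuracy, recall, precision, and Dice coefficient."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)

    tp = np.logical_and(pred, ref).sum()      # epithelium correctly predicted
    fp = np.logical_and(pred, ~ref).sum()     # stroma predicted as epithelium
    fn = np.logical_and(~pred, ref).sum()     # epithelium missed
    tn = np.logical_and(~pred, ~ref).sum()    # stroma correctly predicted

    pixel_accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    recall = tp / (tp + fn + eps)             # sensitivity over reference epithelium
    precision = tp / (tp + fp + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)  # equivalent to pixel-wise F1
    return pixel_accuracy, recall, precision, dice

# Toy usage example with synthetic masks
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 2, size=(256, 256))
    pred = ref.copy()
    pred[:32] = 1 - pred[:32]                 # introduce some disagreement
    print(segmentation_metrics(pred, ref))
```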
The model achieved consistently accurate automated epithelial region segmentation compared with manual annotations. This model could be integrated into a computer-aided diagnosis or prognosis system.
ISSN: 1368-8375, 1879-0593
DOI: 10.1016/j.oraloncology.2022.105942