
Comparison of 3 deep learning neural networks for classifying the relationship between the mandibular third molar and the mandibular canal on panoramic radiographs

Bibliographic Details
Published in: Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, 2020-09, Vol. 130 (3), p. 336-343
Main Authors: Fukuda, Motoki; Ariji, Yoshiko; Kise, Yoshitaka; Nozawa, Michihito; Kuwada, Chiaki; Funakoshi, Takuma; Muramatsu, Chisako; Fujita, Hiroshi; Katsumata, Akitoshi; Ariji, Eiichiro
Format: Article
Language: English
Description
Summary: The aim of this study was to compare time and storage space requirements, diagnostic performance, and consistency among 3 image recognition convolutional neural networks (CNNs) in evaluating the relationship between the mandibular third molar and the mandibular canal on panoramic radiographs. Of 600 panoramic radiographs, 300 each were assigned to noncontact and contact groups based on the relationship between the mandibular third molar and the mandibular canal. The CNNs were trained twice, using cropped image patches of 70 × 70 pixels and 140 × 140 pixels. Time and storage space were measured for each system. Accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were determined. Intra-CNN and inter-CNN consistency values were calculated. Time requirements depended on the depth of the CNN layers, and storage space requirements on the number of learned parameters. The highest AUC values ranged from 0.88 to 0.93 in the CNNs trained with 70 × 70-pixel patches, but there were no significant differences in diagnostic performance among the models trained with the smaller patches. Intra-CNN and inter-CNN consistency values were good or very good for all CNNs. The size of the image patches should be chosen carefully to ensure high diagnostic performance and consistency.
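
As a rough illustration of the evaluation described in the summary (the article does not state its software stack, so the library, label coding, and function names below are assumptions), the per-model metrics and the kappa-based consistency could be computed along these lines in Python with scikit-learn:

# Illustrative sketch only, not the authors' code: assumes binary labels
# (0 = noncontact, 1 = contact) and per-image scores from any trained CNN,
# with scikit-learn supplying the metrics named in the summary.
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, roc_auc_score)

def evaluate(y_true, y_score, threshold=0.5):
    # Binarize the CNN scores, then compute accuracy, sensitivity,
    # specificity, and AUC for one model.
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "auc": roc_auc_score(y_true, y_score),
    }

def consistency(pred_a, pred_b):
    # Cohen's kappa for intra-CNN (two runs of one model) or inter-CNN
    # (two different models) agreement on the same images.
    return cohen_kappa_score(pred_a, pred_b)

On commonly used kappa scales (e.g., Altman's), values of 0.61-0.80 are read as good agreement and 0.81-1.00 as very good, which matches the wording used in the summary.
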
ISSN: 2212-4403, 2212-4411
DOI: 10.1016/j.oooo.2020.04.005