
Application of artificial intelligence using a novel EUS-based convolutional neural network model to identify and distinguish benign and malignant hepatic masses

Bibliographic Details
Published in: Gastrointestinal Endoscopy, 2021-05, Vol. 93 (5), p. 1121-1130.e1
Main Authors: Marya, Neil B.; Powers, Patrick D.; Fujii-Lau, Larissa; Abu Dayyeh, Barham K.; Gleeson, Ferga C.; Chen, Shigao; Long, Zaiyang; Hough, David M.; Chandrasekhara, Vinay; Iyer, Prasad G.; Rajan, Elizabeth; Sanchez, William; Sawas, Tarek; Storm, Andrew C.; Wang, Kenneth K.; Levy, Michael J.
Format: Article
Language: English
Summary: Detection and characterization of focal liver lesions (FLLs) are key to optimizing treatment for patients who may have a primary hepatic cancer or metastatic disease to the liver. This is the first study to develop an EUS-based convolutional neural network (CNN) model for identifying and classifying FLLs. A prospective EUS database comprising cases of FLLs visualized and sampled via EUS was reviewed, and relevant still images and videos of liver parenchyma and FLLs were extracted. Patient data were then randomly distributed for CNN model training and testing. Once a final model was created, occlusion heatmap analysis was performed to assess the ability of the EUS-CNN model to autonomously identify FLLs. The performance of the EUS-CNN for differentiating benign and malignant FLLs was also analyzed. A total of 210,685 unique EUS images from 256 patients were used to train, validate, and test the CNN model. Occlusion heatmap analyses demonstrated that the EUS-CNN model autonomously located FLLs in 92.0% of EUS video assets. When evaluating any random still image extracted from videos or physician-captured images, the CNN model was 90% sensitive and 71% specific (area under the receiver operating characteristic curve [AUROC], 0.861) for classifying malignant FLLs. When evaluating full-length video assets, the EUS-CNN model was 100% sensitive and 80% specific (AUROC, 0.904) for classifying malignant FLLs. This study demonstrated the capability of an EUS-CNN model to autonomously identify FLLs and to accurately classify them as either malignant or benign lesions.
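Note on the occlusion heatmap analysis mentioned in the abstract: this is a standard interpretability technique for image classifiers, in which a patch of the input is masked and the resulting drop in class probability is recorded across the image. The code below is a minimal, generic sketch in PyTorch, assuming a hypothetical `model` (any torch.nn.Module mapping a 1x3xHxW tensor to class logits) and illustrative patch and stride values; it is not the authors' actual EUS-CNN implementation, which is not reproduced in this record.

    # Minimal occlusion-heatmap sketch (assumed PyTorch classifier, hypothetical names).
    import torch

    def occlusion_heatmap(model, image, target_class, patch=32, stride=16, fill=0.0):
        # image: 1x3xHxW tensor; returns a grid of probability drops for the target class.
        model.eval()
        _, _, h, w = image.shape
        with torch.no_grad():
            base = torch.softmax(model(image), dim=1)[0, target_class].item()
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heat = torch.zeros(rows, cols)
        for i, top in enumerate(range(0, h - patch + 1, stride)):
            for j, left in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                # Mask one patch and re-score the frame.
                occluded[:, :, top:top + patch, left:left + patch] = fill
                with torch.no_grad():
                    p = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                heat[i, j] = base - p  # large drop = region the model relies on
        return heat

Regions with large values in the returned grid are those whose occlusion most reduces the predicted probability of the target class, so overlaying the heatmap on an EUS frame indicates whether the model has localized the FLL rather than responding to unrelated image features.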
ISSN: 0016-5107, 1097-6779
DOI: 10.1016/j.gie.2020.08.024