Interactive web application for Explainable DNN-based AI models in oncology
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: The lack of interpretability of Deep Neural Network (DNN)-based Artificial Intelligence (AI) models prevents their adoption in healthcare, despite the huge success DNNs have exhibited in Computer Vision and Bioinformatics. This study presents the development of high-performing DNN models that are explainable and visualized in a web application, to support the acceptance and implementation of DNN-based AI models in medical imaging and oncology in clinical practice. Clinicians can interact with the model by altering features that are meaningful to them, gradually understand, verify, and evaluate its behavior and underlying prediction mechanism, and eventually come to trust it. More specifically, the model in our proof-of-concept app takes segmented thoracic computed tomography images as input and demonstrates excellent performance for lung nodule malignancy classification, with inherent interpretability and interactivity. It integrates DNN-predicted tumor biomarkers from a concept bottleneck model with clinically validated, expert-derived radiomics features. Clinician feedback from model evaluation in our app will contribute substantially to further improving both the algorithms and the interactive environment.
ISSN: 2577-0829
DOI: 10.1109/NSSMICRTSD49126.2023.10338566