DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence

Bibliographic Details
Published in: Computer methods and programs in biomedicine, 2024-01, Vol. 243, p. 107879, Article 107879
Main Authors: Wani, Niyaz Ahmad, Kumar, Ravinder, Bedi, Jatin
Format: Article
Language: English
Description
Summary: Artificial intelligence (AI) has several uses in the healthcare industry, including healthcare management, medical forecasting, practical decision-making, and diagnosis. AI technologies have reached human-like performance, but their adoption is limited because they are still largely viewed as opaque black boxes. This distrust remains the primary barrier to their real-world application, particularly in healthcare. As a result, there is a need for interpretable predictors that provide better predictions and also explain those predictions.

This study introduces "DeepXplainer", a new interpretable hybrid deep learning-based technique for detecting lung cancer and explaining its predictions. The technique is based on a convolutional neural network (CNN) and XGBoost: the CNN automatically learns features of the input through its convolutional layers, and XGBoost then predicts the class label from those learned features. To provide explanations, or explainability, of the predictions, an explainable artificial intelligence method known as "SHAP" is implemented. The open-source "Survey Lung Cancer" dataset was processed using this method.

The proposed method outperformed existing methods on multiple metrics, including accuracy, sensitivity, and F1-score, obtaining an accuracy of 97.43%, a sensitivity of 98.71%, and an F1-score of 98.08%. After the model makes predictions with this high degree of accuracy, each prediction is explained by the explainable artificial intelligence method at both the local and global levels. The proposed deep learning-based classification model for lung cancer has three primary components: one for feature learning, another for classification, and a third for explaining the predictions made by the proposed hybrid (ConvXGB) model. "DeepXplainer" has been evaluated using a variety of metrics, and the results demonstrate that it outperforms the current benchmarks. By providing explanations for its predictions, the proposed approach may help doctors detect and treat lung cancer patients more effectively.
•Proposed a hybrid model for lung cancer detection on clinical data.
•An explainable AI technique is employed to help healthcare professionals make more informed decisions.
•Explanations are presented at both a local and a global level of explainability.
•Different kinds of plots are visualized for both local and global reasoning.
•Performance
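As an illustration of the pipeline the abstract describes, the following is a minimal Python sketch of a ConvXGB-style hybrid: a 1-D CNN learns features from the tabular input, XGBoost predicts the class label from those learned features, and SHAP produces local and global explanations. All layer sizes, hyperparameters, and names here (build_feature_learner, the "learned_features" layer, the placeholder data) are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a ConvXGB-style pipeline: CNN feature learning,
# XGBoost classification, SHAP explanations. Hyperparameters and data
# below are placeholders, not the paper's reported configuration.
import numpy as np
import shap
import xgboost as xgb
from tensorflow import keras
from tensorflow.keras import layers

def build_feature_learner(n_features: int) -> keras.Model:
    """1-D CNN over the tabular feature vector, treated as a sequence."""
    inputs = keras.Input(shape=(n_features, 1))
    x = layers.Conv1D(32, kernel_size=3, activation="relu", padding="same")(inputs)
    x = layers.Conv1D(64, kernel_size=3, activation="relu", padding="same")(x)
    x = layers.GlobalMaxPooling1D()(x)
    features = layers.Dense(16, activation="relu", name="learned_features")(x)
    outputs = layers.Dense(1, activation="sigmoid")(features)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# X: (n_samples, n_features) tabular matrix; y: binary labels (synthetic
# placeholder data standing in for the Survey Lung Cancer dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15)).astype("float32")
y = rng.integers(0, 2, size=200)

cnn = build_feature_learner(X.shape[1])
cnn.fit(X[..., None], y, epochs=5, batch_size=16, verbose=0)

# Read out the penultimate layer as the learned feature representation.
extractor = keras.Model(cnn.input, cnn.get_layer("learned_features").output)
X_learned = extractor.predict(X[..., None], verbose=0)

# XGBoost performs the final class-label prediction on the learned features.
clf = xgb.XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
clf.fit(X_learned, y)

# SHAP explains the classifier: a global summary over the dataset and a
# local explanation for a single patient record.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_learned)
shap.summary_plot(shap_values, X_learned)  # global explanation
shap.force_plot(explainer.expected_value, shap_values[0], X_learned[0],
                matplotlib=True)           # local explanation
```

Note that SHAP explains the XGBoost stage over the CNN-learned features; mapping those feature attributions back to the original clinical attributes is a separate step that depends on the feature learner's design.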
ISSN: 0169-2607
1872-7565
DOI: 10.1016/j.cmpb.2023.107879