
A Secure and Interpretable AI for Smart Healthcare System: A Case Study on Epilepsy Diagnosis Using EEG Signals

Bibliographic Details
Published in: IEEE Journal of Biomedical and Health Informatics, 2024-06, Vol. 28 (6), p. 3236-3247
Main Authors: Ahmad, Ijaz, Zhu, Mingxing, Li, Guanglin, Javeed, Danish, Kumar, Prabhat, Chen, Shixiong
Format: Article
Language:English
Description
Summary: An efficient, patient-independent, and interpretable framework for electroencephalogram (EEG) epileptic seizure detection (ESD) remains challenging because of the complex nature of EEG patterns. Automated detection of epileptic seizures is crucial, and Explainable Artificial Intelligence (XAI) is urgently needed to justify a model's seizure detections in clinical applications. This study therefore implements an XAI-based computer-aided epileptic seizure detection system (XAI-CAESDs) comprising three major modules: a feature engineering module, a seizure detection module, and an explainable decision-making module within a smart healthcare system. To ensure the privacy and security of biomedical EEG data, blockchain is employed. Initially, a Butterworth filter eliminates various artifacts, and the Dual-Tree Complex Wavelet Transform (DTCWT) decomposes the EEG signals, from which real and imaginary eigenvalue features are extracted: linear features in the frequency domain and time domain, and the Fractal Dimension (FD) as a non-linear feature. The best features are selected using Correlation Coefficients (CC) and Distance Correlation (DC) and are fed into Stacking Ensemble Classifiers (SEC) for EEG seizure detection. Further, the Shapley Additive Explanations (SHAP) method of XAI is applied to interpret the predictions of the proposed approach, enabling medical experts to make accurate and understandable decisions. The proposed SEC in XAI-CAESDs demonstrated a 2% improvement in average accuracy, recall, specificity, and F1-score on the University of California, Irvine, Bonn University, and Boston Children's Hospital-MIT EEG data sets. The proposed framework enhances decision-making and diagnosis from biomedical EEG signals and ensures data security in smart healthcare systems.
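The detection pipeline described in the abstract (artifact filtering, feature extraction, correlation-based feature selection, and a stacking ensemble) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: synthetic EEG-like segments stand in for the Bonn/CHB-MIT recordings, simple time-domain statistics replace the paper's DTCWT eigenvalue and fractal-dimension features, and the SHAP interpretation step is omitted to keep dependencies minimal.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# --- Step 1: Butterworth band-pass filter to suppress artifacts ---
# 0.5-40 Hz and fs=173.61 Hz are assumptions (fs matches the Bonn set).
fs = 173.61
b, a = butter(4, [0.5, 40.0], btype="band", fs=fs)

def extract_features(segment):
    """Hypothetical linear time-domain features: mean, std, energy, line length."""
    filtered = filtfilt(b, a, segment)
    return np.array([
        filtered.mean(),
        filtered.std(),
        np.sum(filtered ** 2) / len(filtered),            # mean energy
        np.abs(np.diff(filtered)).sum() / len(filtered),  # line length
    ])

# --- Step 2: synthetic "seizure" vs "non-seizure" segments ---
# Seizure-labeled segments get higher variance, mimicking higher amplitude.
n_segments, seg_len = 200, 512
labels = rng.integers(0, 2, n_segments)
segments = [rng.normal(0.0, 1.0 + 2.0 * y, seg_len) for y in labels]
X = np.vstack([extract_features(s) for s in segments])

# --- Step 3: correlation-based feature selection ---
# Keep the features most correlated with the label, a simple stand-in
# for the paper's CC/DC selection criteria.
corr = np.abs([np.corrcoef(X[:, j], labels)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(corr)[-3:]
X_sel = X[:, keep]

# --- Step 4: Stacking Ensemble Classifier (SEC) ---
sec = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, labels, random_state=0)
sec.fit(X_tr, y_tr)
print(f"held-out accuracy: {sec.score(X_te, y_te):.2f}")
```

In a fuller version, the SHAP step would wrap the fitted ensemble (e.g. a kernel-based explainer over `sec.predict_proba`) to attribute each detection to its input features, which is what lets clinicians audit individual decisions.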
ISSN:2168-2194
2168-2208
DOI:10.1109/JBHI.2024.3366341