Non-small cell lung cancer detection through knowledge distillation approach with teaching assistant
Published in: PLoS ONE, 2024-11, Vol. 19 (11), p. e0306441
Format: Article
Language: English
Summary: Non-small cell lung cancer (NSCLC) metastasizes more slowly than small cell lung cancer and accounts for approximately 85% of lung cancer patients worldwide. In this work, leveraging CT scan images, we deploy a knowledge distillation technique within a teacher, teaching assistant (TA), and student framework for NSCLC classification. We employed several deep learning models (CNN, VGG19, ResNet152v2, Swin, CCT, and ViT) and assigned them teacher, teaching assistant, and student roles. Evaluation shows strong performance across metrics, achieved through cost-sensitive learning and careful fine-tuning of the distillation hyperparameters (alpha and temperature), highlighting the approach's effectiveness for lung cancer tumor prediction and classification. The applied TA (ResNet152v2) and student (CNN) models achieved 90.99% and 94.53% test accuracies, respectively, with optimal hyperparameters (alpha = 0.7 and temperature = 7); the TA stage improves the overall performance of the student model. For explainability, Shapley values are computed with a partition explainer to assess each class's contribution to a prediction, improving the transparency of the implemented deep learning techniques. Finally, a web application was built so that users can classify lung cancer type in newly captured images. The three-stage knowledge distillation technique proved efficient, with significantly fewer trainable parameters and reduced training time, making it applicable to memory-constrained edge devices.
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0306441
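
The summary above describes a temperature-scaled, three-stage distillation pipeline with alpha = 0.7 and temperature = 7. As an illustration only, here is a minimal sketch of a standard distillation loss of the kind the abstract describes; the framework (TensorFlow/Keras), the function name, and the choice of which term alpha weights are assumptions, not details confirmed by the paper.

```python
import tensorflow as tf

def distillation_loss(y_true, student_logits, teacher_logits,
                      alpha=0.7, temperature=7.0):
    """Weighted sum of hard-label cross-entropy and soft-label KL divergence."""
    # Hard loss: student logits vs. ground-truth labels.
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        y_true, student_logits, from_logits=True)
    # Soft loss: student vs. teacher (or TA) outputs, softened by the
    # temperature; the T**2 factor keeps gradient magnitudes comparable.
    soft = tf.keras.losses.KLDivergence()(
        tf.nn.softmax(teacher_logits / temperature),
        tf.nn.softmax(student_logits / temperature)) * temperature ** 2
    # Assumption: alpha weights the soft (distillation) term.
    return alpha * soft + (1.0 - alpha) * hard
```

In the three-stage setup, the same loss would be applied twice: first to distill the teacher into the TA, then to distill the TA into the student.

The abstract also mentions Shapley values obtained with a partition explainer. A typical use of the shap library's partition explainer on an image classifier looks like the following sketch; `model`, `X_test`, and `class_names` are hypothetical placeholders, and the masking strategy is an assumption.

```python
import shap

# Blur-based image masker over inputs shaped like the test images.
masker = shap.maskers.Image("blur(128,128)", X_test[0].shape)
# With an image masker, shap.Explainer dispatches to the Partition explainer.
explainer = shap.Explainer(model.predict, masker, output_names=class_names)
# Explain a few test images and visualize per-class contributions.
shap_values = explainer(X_test[:4], max_evals=500)
shap.image_plot(shap_values)
```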