Variational AdaBoost knowledge distillation for skin lesion classification in dermatology images
Published in: Complex & Intelligent Systems, 2024-10, Vol. 10 (5), pp. 6787–6804
Format: Article
Language: English
Summary: Knowledge distillation has shown promising results for classifying skin lesions in dermatology images. Traditional knowledge distillation typically involves the student model passively mimicking the teacher model's knowledge. We propose utilizing AdaBoost to enable the student to actively mine the teacher's learned representation for skin lesion classification. This paradigm allows the student to determine the "granularity" at which it mines the teacher's knowledge. As the student's learning progresses, it can become challenging to pinpoint specific learning difficulties, especially under potential interference from the teacher. To address this issue, we introduce a variational difficulty-mining strategy that reduces the impact of such interference: the distillation module captures more nuanced classification difficulties by extracting information from the node's l-th hops, and by maximizing the mutual information between the teacher and student we effectively filter out noise interference from these nuanced difficulties. Our proposed framework, Variational AdaBoost Knowledge Distillation (VAdaKD), allows the student to actively mine and leverage the teacher's knowledge for improved skin lesion classification. The method performs well on three benchmark datasets: Dermnet, ISIC 2019, and HAM10000. Specifically, it improves on the baseline by 2–3% on the Dermnet dataset and outperforms the best results of the other compared methods by 1%. Experimental results and visualizations indicate that the proposed method effectively captures learning difficulties and achieves better t-distributed stochastic neighbor embedding (t-SNE) classification results. Our code is available at https://github.com/25brilliant/VAdaKD.
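The abstract's core idea, a student that actively weights its distillation targets AdaBoost-style rather than passively imitating the teacher, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden toy version, not the VAdaKD formulation from the paper: the function name `adaboost_kd_loss`, the temperature `T`, the trade-off `alpha`, and the specific exponential reweighting rule are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def adaboost_kd_loss(student_logits, teacher_logits, labels, sample_weights,
                     T=4.0, alpha=0.7):
    """One step of an AdaBoost-weighted distillation loss (illustrative sketch).

    sample_weights: per-example weights, updated AdaBoost-style below so that
    examples the student misclassifies get emphasized in later steps.
    T: distillation temperature; alpha: distillation vs. cross-entropy trade-off.
    """
    # Soft-target KL divergence, kept per-example so it can be reweighted.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    kd_per_example = F.kl_div(log_p_student, p_teacher,
                              reduction="none").sum(dim=1) * T * T

    # Hard-label cross-entropy, also per-example.
    ce_per_example = F.cross_entropy(student_logits, labels, reduction="none")

    # Weighted combination: difficult examples (large weight) dominate the loss.
    loss = (sample_weights * (alpha * kd_per_example
                              + (1 - alpha) * ce_per_example)).mean()

    # AdaBoost-style weight update: upweight examples the student gets wrong.
    with torch.no_grad():
        preds = student_logits.argmax(dim=1)
        wrong = (preds != labels).float()
        err = (sample_weights * wrong).sum() / sample_weights.sum()
        err = err.clamp(1e-6, 1 - 1e-6)
        beta = 0.5 * torch.log((1 - err) / err)      # boosting confidence term
        sign = wrong * 2 - 1                         # +1 if wrong, -1 if correct
        new_weights = sample_weights * torch.exp(beta * sign)
        new_weights = new_weights / new_weights.sum() * len(new_weights)

    return loss, new_weights
```

In a training loop the returned `new_weights` would be carried over to the next iteration, so the student progressively concentrates its distillation effort on the "difficult" examples; the paper's variational mutual-information step would then filter noise from these difficulty estimates, which this sketch omits.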
ISSN: 2199-4536; 2198-6053
DOI: 10.1007/s40747-024-01501-4