
Heterogeneous Knowledge Distillation Using Conceptual Learning

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 52803-52814
Main Authors: Yu, Yerin; Kim, Namgyu
Format: Article
Language: English
Description
Summary: Recent advances in deep learning have produced large, high-performing models pretrained on massive datasets. Deploying these models in real-world services, however, requires fast inference and low computational complexity. This has driven interest in model compression techniques such as knowledge distillation, which transfers the knowledge learned by a teacher model to a smaller student model. Traditional knowledge distillation is limited in that the student learns from the teacher only the knowledge needed to solve the given problem, so it struggles to respond appropriately to cases it has not yet encountered. In this study, we propose a heterogeneous knowledge distillation method in which knowledge is distilled from a teacher model that has learned higher-level concepts, rather than the specific knowledge the student needs to acquire. The methodology is motivated by the pedagogical finding that problems are solved better by learning not only specific knowledge about the problem but also general knowledge of higher concepts. In particular, by moving beyond the limitation that traditional knowledge distillation can transfer knowledge only for the same task, the proposed transfer of heterogeneous knowledge promises both performance gains for lightweight models and broader applicability of pretrained teacher models. Classification experiments on the 70,000 images of the Fashion-MNIST machine learning benchmark confirm that the proposed heterogeneous knowledge distillation achieves higher classification accuracy than traditional knowledge distillation.
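
As background for the abstract above, the following is a minimal sketch of the conventional (same-task) knowledge distillation loss that the proposed heterogeneous method builds on. The PyTorch formulation, the function name distillation_loss, the temperature T, and the weighting alpha are illustrative assumptions; this sketch does not reproduce the authors' heterogeneous method.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Hinton-style knowledge distillation loss (illustrative sketch only)."""
        # Soft-target term: KL divergence between temperature-softened
        # teacher and student distributions, scaled by T^2.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target term: ordinary cross-entropy against ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Example usage with random logits for a 10-class task (e.g., Fashion-MNIST):
    student_logits = torch.randn(32, 10)
    teacher_logits = torch.randn(32, 10)
    labels = torch.randint(0, 10, (32,))
    loss = distillation_loss(student_logits, teacher_logits, labels)

In the heterogeneous setting described in the abstract, the teacher's soft targets would come from a model trained on a higher-level concept task rather than the student's own task; the loss structure above is only the standard baseline.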
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3387459