Cross-domain knowledge distillation for text classification
Published in: Neurocomputing (Amsterdam), 2022-10, Vol. 509, pp. 11-20
Format: Article
Language: English
Summary: Most text classification methods owe their success to large-scale annotated data and pre-trained language models. In practice, however, labeled data is often insufficient, and pre-trained language models are hard to deploy because of their high computational requirements and slow inference. In this paper, we propose cross-domain knowledge distillation, in which the teacher and student tasks belong to different domains. The approach not only acquires knowledge from multiple teachers but also accelerates inference and reduces model size. Specifically, we train pre-trained language models on factual knowledge obtained by aligning Wikipedia text with Wikidata triplets and fine-tune them as teacher models. We then use heterogeneous multi-teacher knowledge distillation to transfer knowledge from the multiple teacher models to the student model; a multi-teacher knowledge vote distills the knowledge most relevant to the target domain. Moreover, we introduce a teacher assistant to help distill large pre-trained language models. Finally, we reduce the difference between the source and target domains through multi-source domain adaptation to address the domain shift problem. Experiments on multiple public datasets demonstrate that our method achieves competitive performance with fewer parameters and less inference time.
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2022.08.061
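
The summary above describes the distillation objective only at a high level. As a rough, generic illustration of how soft targets from several teachers are typically combined with a temperature-scaled KL term, here is a minimal PyTorch sketch; the uniform teacher weighting, function name, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Generic multi-teacher knowledge distillation loss (illustrative sketch only).
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=2.0, alpha=0.5, teacher_weights=None):
    # Hard-label loss on the annotated target-domain examples.
    ce = F.cross_entropy(student_logits, labels)

    # Combine the teachers' softened predictions; uniform weights stand in
    # for whatever voting/weighting scheme a given distillation setup uses.
    if teacher_weights is None:
        teacher_weights = [1.0 / len(teacher_logits_list)] * len(teacher_logits_list)
    soft_targets = sum(w * F.softmax(t / temperature, dim=-1)
                       for w, t in zip(teacher_weights, teacher_logits_list))

    # Soft-label loss: KL divergence between temperature-softened
    # distributions, scaled by T^2 as in standard knowledge distillation.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  soft_targets, reduction="batchmean") * temperature ** 2

    # Weighted combination of the hard-label and soft-label terms.
    return alpha * ce + (1.0 - alpha) * kd

# Example usage with random tensors (batch of 8, 4 classes, 2 teachers).
student = torch.randn(8, 4)
teachers = [torch.randn(8, 4), torch.randn(8, 4)]
labels = torch.randint(0, 4, (8,))
loss = multi_teacher_kd_loss(student, teachers, labels)
```

In a cross-domain setup such as the one summarized above, the uniform average would be replaced by a scheme that weights each teacher by its relevance to the target domain.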