
Dynamic image super-resolution via progressive contrastive self-distillation

Bibliographic Details
Published in: Pattern Recognition, 2024-09, Vol. 153, p. 110502, Article 110502
Main Authors: Zhang, Zhizhong; Xie, Yuan; Zhang, Chong; Wang, Yanbo; Qu, Yanyun; Lin, Shaohui; Ma, Lizhuang; Tian, Qi
Format: Article
Language: English
Description
Summary: Convolutional neural networks (CNNs) are highly successful for image super-resolution (SR). However, they often require sophisticated architectures with high memory cost and computational overhead, significantly restricting their practical deployment on resource-limited devices. In this paper, we propose a novel dynamic contrastive self-distillation (Dynamic-CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models, and explore using the trained model for dynamic inference. In particular, to build a compact student network, a channel-splitting super-resolution network (CSSR-Net) is first constructed from a target teacher network. Then, we propose a novel contrastive loss to improve the quality of SR images via explicit knowledge transfer. Furthermore, progressive CSD (Pro-CSD) is developed to extend the two-branch CSSR-Net into a multi-branch network, yielding a model that is switchable at runtime. Finally, a difficulty-aware branch selection strategy for dynamic inference is given. Extensive experiments demonstrate that the proposed Dynamic-CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN.

Highlights:
• A novel dynamic contrastive self-distillation (Dynamic-CSD) framework is proposed.
• Dynamic-CSD simultaneously compresses and accelerates various SR models.
• The Pro-CSD scheme further improves the performance of the CSD scheme.
• Dynamic inference is combined with multi-branch SR models trained by Pro-CSD.
• Dynamic-CSD allocates resources according to the input, achieving top performance and speed.
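The abstract names the channel-splitting construction and the contrastive loss but gives no implementation detail. As a rough illustration only, the PyTorch sketch below shows one plausible reading: a student convolution is obtained by slicing off a fraction of the teacher's channels, and a contrastive distillation loss pulls the student's SR output toward the teacher's output (positive) while pushing it away from a low-quality bicubic upsample of the input (negative). The names split_conv and contrastive_distill_loss, the 0.5 split ratio, and the bicubic negative are all assumptions for illustration, not the paper's code.

# Minimal sketch (assumed, not the paper's implementation): build a
# narrower "student" conv by channel-splitting a teacher conv, and a
# contrastive distillation loss with the teacher output as the positive
# and a bicubic upsample of the LR input as the negative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_conv(teacher_conv: nn.Conv2d, ratio: float = 0.5) -> nn.Conv2d:
    # Keep the first `ratio` fraction of input/output channels and copy
    # the corresponding teacher weights, so the student branch starts
    # from (and can share parameters with) the teacher.
    out_c = max(1, int(teacher_conv.out_channels * ratio))
    in_c = max(1, int(teacher_conv.in_channels * ratio))
    student = nn.Conv2d(in_c, out_c, teacher_conv.kernel_size,
                        stride=teacher_conv.stride,
                        padding=teacher_conv.padding,
                        bias=teacher_conv.bias is not None)
    with torch.no_grad():
        student.weight.copy_(teacher_conv.weight[:out_c, :in_c])
        if teacher_conv.bias is not None:
            student.bias.copy_(teacher_conv.bias[:out_c])
    return student

def contrastive_distill_loss(sr_student, sr_teacher, lr_input,
                             scale=4, eps=1e-6):
    # Positive distance: student vs. teacher output. Negative distance:
    # student vs. a blurry bicubic upsample. Minimizing the ratio pulls
    # the student toward the teacher and away from the negative.
    negative = F.interpolate(lr_input, scale_factor=scale,
                             mode="bicubic", align_corners=False)
    pos = F.l1_loss(sr_student, sr_teacher)
    neg = F.l1_loss(sr_student, negative)
    return pos / (neg + eps)

In the actual method the loss may well be computed in a feature space and combined with reconstruction terms; the pixel-space ratio above only conveys the pull/push idea behind the contrastive knowledge transfer.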
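Similarly, the difficulty-aware branch selection strategy is only named, not specified. The sketch below illustrates one way such routing could work for a multi-branch, runtime-switchable model: a cheap difficulty proxy picks the narrowest branch whose threshold covers the input, falling back to the full-width branch for hard inputs. The gradient-magnitude heuristic in estimate_difficulty, the select_branch routine, and the thresholds are hypothetical.

# Hypothetical sketch of difficulty-aware dynamic inference over a list
# of branches ordered from narrowest (cheapest) to widest (full model).
import torch

def estimate_difficulty(lr: torch.Tensor) -> float:
    # Crude proxy: texture-rich inputs (large local gradients) are
    # assumed to need a wider branch to super-resolve well.
    gray = lr.mean(dim=1, keepdim=True)
    dx = (gray[..., :, 1:] - gray[..., :, :-1]).abs().mean()
    dy = (gray[..., 1:, :] - gray[..., :-1, :]).abs().mean()
    return (dx + dy).item()

def select_branch(lr, branches, thresholds):
    # `branches`: callables, cheapest first; `thresholds`: one per
    # branch except the last, which catches all remaining inputs.
    d = estimate_difficulty(lr)
    for net, t in zip(branches[:-1], thresholds):
        if d <= t:
            return net(lr)
    return branches[-1](lr)

Routing of this kind allocates computation per input: smooth, easy images exit through a cheap branch while difficult ones use the full model, which matches the abstract's claim of input-dependent resource allocation.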
ISSN: 0031-3203; 1873-5142
DOI: 10.1016/j.patcog.2024.110502