Dynamic image super-resolution via progressive contrastive self-distillation
Published in: | Pattern Recognition, 2024-09, Vol. 153, p. 110502, Article 110502 |
---|---|
Main Authors: | Zhang, Zhizhong; Xie, Yuan; Zhang, Chong; Wang, Yanbo; Qu, Yanyun; Lin, Shaohui; Ma, Lizhuang; Tian, Qi |
Format: | Article |
Language: | English |
Subjects: | Dynamic neural networks; Model acceleration; Model compression; Single Image Super-Resolution |
Online Access: | https://doi.org/10.1016/j.patcog.2024.110502 |
container_start_page | 110502 |
container_title | Pattern Recognition |
container_volume | 153 |
creator | Zhang, Zhizhong; Xie, Yuan; Zhang, Chong; Wang, Yanbo; Qu, Yanyun; Lin, Shaohui; Ma, Lizhuang; Tian, Qi |
description | Convolutional neural networks (CNNs) are highly successful for image super-resolution (SR). However, they often require sophisticated architectures with high memory cost and computational overhead, significantly restricting their practical deployment on resource-limited devices. In this paper, we propose a novel dynamic contrastive self-distillation (Dynamic-CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models, and we explore using the trained model for dynamic inference. In particular, to build a compact student network, a channel-splitting super-resolution network (CSSR-Net) is first constructed from a target teacher network. Then, we propose a novel contrastive loss to improve the quality of SR images via explicit knowledge transfer. Furthermore, progressive CSD (Pro-CSD) is developed to extend the two-branch CSSR-Net into a multi-branch network, yielding a model that is switchable at runtime. Finally, a difficulty-aware branch-selection strategy for dynamic inference is given. Extensive experiments demonstrate that the proposed Dynamic-CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN, and CARN. |
highlights |
• A novel dynamic contrastive self-distillation (Dynamic-CSD) framework is proposed.
• Dynamic-CSD can simultaneously compress and accelerate various SR models.
• The Pro-CSD scheme further improves the performance of the CSD scheme.
• We combine dynamic inference with multi-branch SR models trained by Pro-CSD.
• Dynamic-CSD allocates resources according to the input, achieving both top performance and speed. |
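The channel-splitting construction described in the abstract (a CSSR-Net student carved out of the teacher) can be pictured with a short sketch. The following is a minimal PyTorch illustration under our own assumptions, not the paper's implementation: `ChannelSplitConv` and the 0.5 width ratio are hypothetical names and values. The idea shown is that the student reuses the leading channels of each teacher convolution, so both widths share one set of weights.

```python
# Hedged sketch of channel splitting: the student branch reuses the leading
# channels of each teacher convolution, so teacher and student share weights.
# `ChannelSplitConv` and `width_ratio` are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSplitConv(nn.Module):
    """Wraps a teacher conv; forward runs at full (teacher) or reduced (student) width."""

    def __init__(self, teacher_conv: nn.Conv2d, width_ratio: float = 0.5):
        super().__init__()
        self.conv = teacher_conv  # shared parameters
        self.ratio = width_ratio

    def forward(self, x: torch.Tensor, student: bool = False) -> torch.Tensor:
        if not student:
            return self.conv(x)  # full-width teacher path
        # Student path: slice the leading output/input channels of the kernel.
        out_c = max(1, int(self.conv.out_channels * self.ratio))
        in_c = min(x.shape[1], self.conv.in_channels)
        weight = self.conv.weight[:out_c, :in_c]
        bias = self.conv.bias[:out_c] if self.conv.bias is not None else None
        return F.conv2d(x[:, :in_c], weight, bias,
                        stride=self.conv.stride, padding=self.conv.padding)


# Toy usage: one layer evaluated at teacher and student widths.
layer = ChannelSplitConv(nn.Conv2d(64, 64, 3, padding=1), width_ratio=0.5)
x = torch.randn(1, 64, 32, 32)
print(layer(x).shape)                # torch.Size([1, 64, 32, 32])
print(layer(x, student=True).shape)  # torch.Size([1, 32, 32, 32])
```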
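The "novel contrastive loss ... via explicit knowledge transfer" suggests pulling the student's output toward the teacher's while pushing it away from a cheap anchor. Below is a hedged sketch of one plausible form, assuming an L1 distance ratio with the teacher's SR output as the positive and a bicubic upsample as the negative; the paper's exact formulation may differ.

```python
# Hedged sketch of a contrastive distillation loss for SR: the student output
# is pulled toward the teacher output (positive) and pushed away from a cheap
# bicubic upsample (negative). The L1 ratio form is our assumption, not the
# paper's exact loss.
import torch
import torch.nn.functional as F


def contrastive_sr_loss(student_sr: torch.Tensor,
                        teacher_sr: torch.Tensor,
                        bicubic_sr: torch.Tensor,
                        eps: float = 1e-6) -> torch.Tensor:
    d_pos = F.l1_loss(student_sr, teacher_sr)  # stay close to the teacher
    d_neg = F.l1_loss(student_sr, bicubic_sr)  # move away from the bicubic anchor
    return d_pos / (d_neg + eps)               # minimized by small d_pos, large d_neg


# Toy usage with random tensors standing in for x4 SR outputs.
shape = (1, 3, 128, 128)
loss = contrastive_sr_loss(torch.rand(shape), torch.rand(shape), torch.rand(shape))
print(float(loss))
```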
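Finally, the difficulty-aware branch selection used for dynamic inference can be sketched as routing: easy inputs go to a narrow branch, hard inputs to wider ones. The gradient-energy difficulty measure and the thresholds below are our own illustrative assumptions, not the paper's criterion.

```python
# Hedged sketch of difficulty-aware branch selection: approximate patch
# difficulty by gradient energy and route to a branch index accordingly.
# The measure and thresholds are illustrative assumptions.
import torch


def select_branch(lr_patch: torch.Tensor, thresholds=(0.02, 0.05)) -> int:
    """Return a branch index (0 = cheapest) from a simple gradient-energy score."""
    gray = lr_patch.mean(dim=1, keepdim=True)  # (N, 1, H, W)
    dx = (gray[..., :, 1:] - gray[..., :, :-1]).abs().mean()
    dy = (gray[..., 1:, :] - gray[..., :-1, :]).abs().mean()
    difficulty = float(dx + dy)
    for idx, t in enumerate(thresholds):
        if difficulty < t:
            return idx            # easy patch -> narrow, cheap branch
    return len(thresholds)        # hardest patches -> full-width teacher branch


# Toy usage: a flat patch routes to branch 0, a noisy one to the last branch.
flat = torch.zeros(1, 3, 48, 48)
noisy = torch.rand(1, 3, 48, 48)
print(select_branch(flat), select_branch(noisy))  # 0 2
```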
doi_str_mv | 10.1016/j.patcog.2024.110502 |
format | article |
publisher | Elsevier Ltd |
rights | 2024 Elsevier Ltd |
orcid | 0000-0001-6945-7437; 0009-0002-4481-5677 |
fulltext | fulltext |
identifier | ISSN: 0031-3203 |
ispartof | Pattern Recognition, 2024-09, Vol.153, p.110502, Article 110502 |
issn | 0031-3203; 1873-5142 |
language | eng |
recordid | cdi_crossref_primary_10_1016_j_patcog_2024_110502 |
source | ScienceDirect Journals |
subjects | Dynamic neural networks; Model acceleration; Model compression; Single Image Super-Resolution |
title | Dynamic image super-resolution via progressive contrastive self-distillation |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T13%3A19%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-elsevier_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Dynamic%20image%20super-resolution%20via%20progressive%20contrastive%20self-distillation&rft.jtitle=Pattern%20recognition&rft.au=Zhang,%20Zhizhong&rft.date=2024-09&rft.volume=153&rft.spage=110502&rft.pages=110502-&rft.artnum=110502&rft.issn=0031-3203&rft.eissn=1873-5142&rft_id=info:doi/10.1016/j.patcog.2024.110502&rft_dat=%3Celsevier_cross%3ES003132032400253X%3C/elsevier_cross%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c255t-fbb5c1e2a0ca579b34c0e8bda7688a7cd8b771061a1ba42bdb7d73ef4e6c4d6e3%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |