Learning Frequency-aware Dynamic Network for Efficient Super-Resolution
Deep learning based methods, especially convolutional neural networks (CNNs), have been successfully applied in the field of single image super-resolution (SISR). To obtain better fidelity and visual quality, most existing networks are heavily designed with massive computation. However, the computation resources of modern mobile devices are limited and cannot easily support such expensive costs. To this end, this paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain. In practice, the high-frequency part is processed with expensive operations, while the low-frequency part is assigned cheap operations to relieve the computation burden. Since pixels or image patches belonging to low-frequency areas contain relatively few textural details, this dynamic network does not affect the quality of the resulting super-resolution images. In addition, we embed predictors into the proposed dynamic network to fine-tune the handcrafted frequency-aware masks end to end. Extensive experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed with various SISR neural architectures to obtain a better tradeoff between visual quality and computational complexity. For instance, we can reduce the FLOPs of SR models by approximately 50% while preserving state-of-the-art SISR performance.
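The patch routing the abstract describes can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the function `frequency_mask`, the 8×8 patch size, the low-frequency corner size `hf_ratio`, and the energy `threshold` are all hypothetical choices used only to show the idea of labeling DCT-domain patches as cheap (low-frequency) or expensive (high-frequency).

```python
import numpy as np
from scipy.fft import dctn  # multidimensional DCT-II

def frequency_mask(image, patch=8, hf_ratio=0.5, threshold=0.1):
    """Label each patch as high-frequency (True) or low-frequency (False).

    A patch whose DCT energy outside the top-left (low-frequency)
    corner exceeds `threshold` of its total energy would be routed
    to the expensive branch; all parameter values are illustrative.
    """
    h, w = image.shape
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    k = int(patch * hf_ratio)  # side length of the low-frequency corner
    for i in range(h // patch):
        for j in range(w // patch):
            block = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            coeffs = dctn(block, norm="ortho")
            total = np.sum(coeffs ** 2) + 1e-12
            low = np.sum(coeffs[:k, :k] ** 2)  # energy in the low-frequency corner
            mask[i, j] = (total - low) / total > threshold
    return mask

# A flat patch carries no texture; a checkerboard is almost pure high frequency.
flat = np.full((8, 8), 0.5)
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
img = np.hstack([flat, checker])
print(frequency_mask(img))  # flat patch → cheap path, checkerboard → expensive path
```

A real dynamic network would replace the boolean routing with learned predictors, as the paper does when it fine-tunes the handcrafted masks end to end.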
Published in: | arXiv.org 2021-08 |
---|---|
Main Authors: | Xie, Wenbin ; Song, Dehua ; Chang, Xu ; Xu, Chunjing ; Zhang, Hui ; Wang, Yunhe |
Format: | Article |
Language: | English |
Subjects: | Artificial neural networks ; Computation ; Discrete cosine transform ; Electronic devices ; Image quality ; Image resolution |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Xie, Wenbin ; Song, Dehua ; Chang, Xu ; Xu, Chunjing ; Zhang, Hui ; Wang, Yunhe |
description | Deep learning based methods, especially convolutional neural networks (CNNs), have been successfully applied in the field of single image super-resolution (SISR). To obtain better fidelity and visual quality, most existing networks are heavily designed with massive computation. However, the computation resources of modern mobile devices are limited and cannot easily support such expensive costs. To this end, this paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain. In practice, the high-frequency part is processed with expensive operations, while the low-frequency part is assigned cheap operations to relieve the computation burden. Since pixels or image patches belonging to low-frequency areas contain relatively few textural details, this dynamic network does not affect the quality of the resulting super-resolution images. In addition, we embed predictors into the proposed dynamic network to fine-tune the handcrafted frequency-aware masks end to end. Extensive experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed with various SISR neural architectures to obtain a better tradeoff between visual quality and computational complexity. For instance, we can reduce the FLOPs of SR models by approximately 50% while preserving state-of-the-art SISR performance. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2501662049 |
source | Publicly Available Content Database |
subjects | Artificial neural networks ; Computation ; Discrete cosine transform ; Electronic devices ; Image quality ; Image resolution |
title | Learning Frequency-aware Dynamic Network for Efficient Super-Resolution |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T09%3A52%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Learning%20Frequency-aware%20Dynamic%20Network%20for%20Efficient%20Super-Resolution&rft.jtitle=arXiv.org&rft.au=Xie,%20Wenbin&rft.date=2021-08-16&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2501662049%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_25016620493%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2501662049&rft_id=info:pmid/&rfr_iscdi=true |