
LCRCA: image super-resolution using lightweight concatenated residual channel attention networks

Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2022-07, Vol. 52 (9), p. 10045-10059
Main Authors: Peng, Changmeng, Shu, Pei, Huang, Xiaoyang, Fu, Zhizhong, Li, Xiaofeng
Format: Article
Language: English
Description
Summary: Deep neural network-based super-resolution methods produce images closer to the original high-resolution images than non-learning-based ones, but their large and often redundant network structures and parameter counts make them impractical. To obtain high-quality super-resolution results in computation-resource-limited scenarios, we propose a lightweight skip-concatenated residual channel attention network (LCRCA) for image super-resolution. Specifically, we design a light yet efficient deep residual block (DRB) that generates more precise residual information by using more convolution layers under the same computation budget. To enhance the feature maps of the DRB, we propose an improved channel attention mechanism, statistical channel attention (SCA), which incorporates channel statistics. In addition, instead of the commonly used skip connections, we build information flows between feature maps of different layers with skip concatenation (SC). Finally, the DRB, SCA, and SC are combined to form the proposed LCRCA network. Experiments on four test sets show that our method gains up to 3.2 dB over bicubic interpolation and 0.12 dB over the representative lightweight method FERN, and recovers image details more accurately than the compared algorithms. Code is available at https://github.com/pengcm/LCRCA.
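
The abstract describes the SCA block only at a high level. As one illustrative reading, the minimal PyTorch sketch below shows how channel attention could be driven by channel statistics, here the per-channel mean and standard deviation feeding a small bottleneck that produces per-channel rescaling weights. The module name, reduction ratio, and the way the statistics are combined are assumptions for illustration, not the authors' published implementation (see the linked repository for that).

    # Hypothetical sketch of a "statistical channel attention" (SCA) block,
    # assuming it extends squeeze-and-excitation-style channel attention with
    # the per-channel standard deviation in addition to the usual mean.
    # Names and the reduction ratio are assumptions, not the paper's code.
    import torch
    import torch.nn as nn

    class StatisticalChannelAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            # Bottleneck over the concatenated per-channel statistics (mean, std).
            self.fc = nn.Sequential(
                nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Per-channel statistics over the spatial dimensions.
            mean = x.mean(dim=(2, 3), keepdim=True)                  # (N, C, 1, 1)
            std = x.std(dim=(2, 3), keepdim=True, unbiased=False)    # (N, C, 1, 1)
            stats = torch.cat([mean, std], dim=1)                    # (N, 2C, 1, 1)
            weights = self.fc(stats)                                 # weights in (0, 1)
            return x * weights                                       # rescale feature maps

    if __name__ == "__main__":
        feats = torch.randn(1, 64, 48, 48)
        sca = StatisticalChannelAttention(64)
        print(sca(feats).shape)  # torch.Size([1, 64, 48, 48])
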
ISSN: 0924-669X; 1573-7497
DOI: 10.1007/s10489-021-02891-5